by shae
## How it works

This Lead Capture & Auto-Qualification workflow transforms raw leads into qualified prospects through intelligent automation. Here's the high-level flow:

Lead Intake → Data Validation → Enrichment → Scoring → Smart Routing → CRM Integration & Notifications

The system captures leads from any source, validates the data, enriches it with company intelligence, scores it against qualification criteria, and automatically routes high-value prospects to sales while nurturing lower-priority leads.

## Set up steps

**Time to set up:** approximately 30-45 minutes
**Prerequisites:** active accounts with HubSpot, Clearbit, Apollo, and Slack

**Step 1: Import Workflow (2 minutes)**
- Copy the workflow JSON and import it into your n8n instance
- The workflow will appear with all nodes and sticky-note documentation

**Step 2: Configure Environment Variables (5 minutes)**
Set these in your n8n environment:
- `APOLLO_API_URL`
- `SLACK_SALES_CHANNEL_ID`
- `SLACK_MARKETING_CHANNEL_ID`
- `CRM_ASSIGNMENT_URL`

**Step 3: Set Up API Credentials (15 minutes)**
Create credential connections for:
- Clearbit API (enrichment)
- Apollo API (HTTP Header Auth)
- HubSpot API (CRM integration)
- Slack API (notifications)

**Step 4: Customize Scoring Logic (10 minutes)**
- Review the qualification criteria in the Code node
- Adjust scoring weights based on your ideal customer profile
- Modify industry targeting and company-size thresholds

**Step 5: Test & Activate (8 minutes)**
- Send test webhook requests to validate the flow
- Verify CRM contact creation and Slack notifications
- Activate the workflow for live lead processing
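The scoring logic in the Code node (Step 4) could look something like this sketch; the field names, weights, and thresholds here are illustrative assumptions to adapt to your ideal customer profile, not the template's actual criteria:

```javascript
// Hypothetical lead-scoring sketch for an n8n Code node.
// Weights, thresholds, and field names are assumptions, not the
// template's exact code.
const TARGET_INDUSTRIES = ["SaaS", "Fintech", "E-commerce"];

function scoreLead(lead) {
  let score = 0;
  if (TARGET_INDUSTRIES.includes(lead.industry)) score += 30;   // industry targeting
  if (lead.employeeCount >= 50) score += 25;                    // company-size threshold
  if (lead.email && !/@(gmail|yahoo|hotmail)\./i.test(lead.email)) score += 20; // business email
  if (lead.jobTitle && /vp|director|head|chief/i.test(lead.jobTitle)) score += 25; // seniority
  return { ...lead, score, route: score >= 60 ? "sales" : "nurture" };
}

// Example: a mid-size SaaS lead with a business email routes to sales.
const result = scoreLead({
  industry: "SaaS",
  employeeCount: 120,
  email: "jane@acme.com",
  jobTitle: "VP Marketing",
});
```

Anything below the routing threshold would follow the nurture branch instead of being assigned to a sales rep.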
by Bhavy Shekhaliya
## Overview

AI-powered n8n workflow that creates viral LinkedIn posts by learning from successful content. It features two modules: (1) a Telegram-based scraper that builds a vector database of viral LinkedIn posts, and (2) a web form that generates optimized posts using multi-agent AI with RAG (Retrieval-Augmented Generation) from your curated viral content library.

**Key Capabilities:**
- Scrapes LinkedIn post content via Telegram bot
- Stores posts in a Supabase vector database with OpenAI embeddings
- 3-agent system analyzes hooks, structures outlines, and generates posts
- RAG integration retrieves similar viral posts for pattern matching
- Auto-publishes to LinkedIn or provides formatted output

## How It Works

### Module 1: Viral Post Collection (Telegram Bot)

**Step 1: URL Validation**
- User sends a LinkedIn post URL to the Telegram bot
- Workflow validates that the URL contains "linkedin.com"
- Shows a typing indicator for better UX

**Step 2: Content Scraping**
- HTTP request fetches the post HTML
- CSS selector extracts the main commentary: `[data-test-id="main-feed-activity-card__commentary"]`
- Handles scraping failures with error messages

**Step 3: Vector Storage**
- Converts post text to OpenAI embeddings (text-embedding-ada-002)
- Stores it in the Supabase `linkedin_post` table with vector indexing
- Sends a success confirmation via Telegram

### Module 2: AI Post Generation (Web Form)

**Stage 1: Hook Analysis Agent**
- **Input:** User-provided hook text
- **Process:** AI extracts topic, niche/industry, emotional tone, and 3-5 key points
- **Output:** Structured JSON with analyzed elements
- **Models:** GPT-4o-mini or Gemini 2.5-flash (dual fallback)

**Stage 2: Post Structure Agent**
- **Input:** Analyzed hook data
- **Process:** Creates a 5-section outline (Hook, Problem, Value/Lesson, Solution, CTA)
- **Output:** Structured framework for the final post
- **Models:** GPT-4o-mini or Gemini 2.5-flash

**Stage 3: Post Generator Agent (RAG)**
- **Input:** Post structure + topic
- **RAG Process:** Queries the Supabase vector store for the 5 most similar viral posts; analyzes patterns (hooks, storytelling, CTAs, engagement metrics); identifies optimal length, formatting, and emotional triggers
- **Output:** Complete LinkedIn post applying viral patterns
- **Models:** GPT-4o-mini or Gemini 2.5-flash, with GPT-5-NANO for structured output

**Stage 4: Publication**
- Auto-publishes to LinkedIn via API, or
- Returns formatted post text for manual posting

## How To Use

### Setup

**1. Configure Supabase Vector Database**
- Create a Supabase project
- Create a `linkedin_post` table with a vector column (1536 dimensions for OpenAI embeddings)
- Enable the vector extension: `CREATE EXTENSION vector;`
- Update credentials in the "Upload Document" and "Supabase Vector Store" nodes

**2. Set Up Telegram Bot (Module 1)**
- Create a bot via @BotFather
- Get the bot token and update the "On Telegram Message" credentials
- Start the bot and get your chat ID
- Activate the workflow

**3. Configure OpenAI API**
- Add your API key to the "Embeddings" nodes (both modules)
- Configure language model credentials (GPT-4o-mini, GPT-5-NANO)

**4. Set Up LinkedIn API (Optional for Module 2)**
- Create a LinkedIn app with member permissions
- Configure OAuth2 credentials in the "Create a post" node
- Or remove the node to get text output only

**5. Access Web Form**
- Get the form URL from the "LinkedIn Form" webhook
- Bookmark it for easy access
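The Module 1 URL check can be reproduced with a few lines of JavaScript in a Code node. This sketch mirrors the described "contains linkedin.com" rule, tightened to check the hostname rather than the raw string:

```javascript
// Minimal sketch of the Module 1 URL validation: accept only
// messages that parse as a URL on a linkedin.com host.
function isLinkedInPostUrl(text) {
  try {
    const url = new URL(text.trim());
    return url.hostname.endsWith("linkedin.com");
  } catch {
    return false; // not a URL at all
  }
}
```

Parsing with `new URL` and checking the hostname is slightly stricter than a plain substring match, which would also accept lookalikes such as `https://evil.com/linkedin.com`.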
by Rohit Dabra
# Jira MCP Server Integration with n8n

## Overview

Transform your Jira project management with the power of AI and automation! This n8n workflow template demonstrates how to create a seamless integration between chat interfaces, AI processing, and Jira Software using an MCP (Model Context Protocol) server architecture.

## What This Workflow Does

- **Chat-Driven Automation:** Trigger Jira operations through simple chat messages
- **AI-Powered Issue Creation:** Automatically generate detailed Jira issues with descriptions and acceptance criteria
- **Complete Jira Management:** Get issue status, changelogs, and comments, and perform full CRUD operations
- **Memory Integration:** Maintain context across conversations for smarter automations
- **Zero Manual Entry:** Eliminate repetitive data entry and human errors

## Key Features

✅ **Natural Language Processing:** Use Google Gemini to understand and process chat requests
✅ **MCP Server Integration:** Secure, efficient communication with Jira APIs
✅ **Comprehensive Jira Operations:** Create, read, update, and delete issues and comments
✅ **Smart Memory:** Context-aware conversations for better automation
✅ **Multi-Action Workflow:** Handle multiple Jira operations from a single trigger

## Demo Video

🎥 Watch the Complete Demo: Automate Jira Issue Creation with n8n & AI | MCP Server Integration

## Prerequisites

Before setting up this workflow, ensure you have:
- **n8n instance** (cloud or self-hosted)
- **Jira Software** account with appropriate permissions
- **Google Gemini API** credentials
- **MCP Server** configured and accessible
- Basic understanding of n8n workflows

## Setup Guide

### Step 1: Import the Workflow
1. Copy the workflow JSON from this template
2. In your n8n instance, click **Import > From Text**
3. Paste the JSON and click **Import**

### Step 2: Configure Google Gemini
1. Open the **Google Gemini Chat Model** node
2. Add your Google Gemini API credentials
3. Configure the model parameters:
   - Model: gemini-pro (recommended)
   - Temperature: 0.7 for balanced creativity
   - Max tokens: as per your requirements

### Step 3: Set Up MCP Server Connection
1. Configure the **MCP Client** node:
   - Server URL: your MCP server endpoint
   - Authentication: add required credentials
   - Timeout: set appropriate timeout values
2. Ensure your MCP server supports Jira operations:
   - Issue creation and retrieval
   - Comment management
   - Status updates
   - Changelog access

### Step 4: Configure Jira Integration
1. Set up Jira credentials in n8n:
   - Go to **Credentials > Add Credential**
   - Select **Jira Software API**
   - Add your Jira instance URL, email, and API token
2. Configure each Jira node:
   - **Get Issue Status:** set the project key and filters
   - **Create Issue:** define the issue type and required fields
   - **Manage Comments:** set permissions and content rules

### Step 5: Memory Configuration
Configure the **Simple Memory** node:
- Set a memory key for conversation context
- Define the memory retention duration
- Configure the memory scope (user/session level)

### Step 6: Chat Trigger Setup
Configure the **When Chat Message Received** trigger:
- Set up the webhook URL or chat platform integration
- Define message filters if needed
- Test the trigger with sample messages

## Usage Examples

### Creating a Jira Issue
Chat input: `Can you create an issue in Jira for Login Page with detailed description and acceptance criteria?`

Expected output:
- New Jira issue created with a structured description
- Automatically generated acceptance criteria
- Proper labeling and categorization

### Getting Issue Status
Chat input: `What's the status of issue PROJ-123?`

Expected output:
- Current issue status
- Last-updated information
- Assigned user details

### Managing Comments
Chat input: `Add a comment to issue PROJ-123: "Ready for testing in staging environment"`

Expected output:
- Comment added to the specified issue
- Notification sent to relevant team members

## Customization Options

### Extending Jira Operations
- Add more Jira operations (transitions, watchers, attachments)
- Implement custom field handling
- Create multi-project workflows

### AI Enhancement
- Fine-tune Gemini prompts for better issue descriptions
- Add custom validation rules
- Implement approval workflows

### Integration Expansion
- Connect to Slack, Discord, or Teams
- Add email notifications
- Integrate with time-tracking tools

## Troubleshooting

### Common Issues

**MCP Server Connection Failed**
- Verify the server URL and credentials
- Check network connectivity
- Ensure the MCP server is running and accessible

**Jira API Errors**
- Validate Jira credentials and permissions
- Check project access rights
- Verify issue type and field configurations

**AI Response Issues**
- Review Gemini API quotas and limits
- Adjust prompt engineering for better results
- Check model parameters and settings

### Performance Tips
- Optimize memory usage for long conversations
- Implement rate limiting for API calls
- Use error handling and retry mechanisms
- Monitor workflow execution times

## Best Practices

1. **Security:** Store all credentials securely using n8n's credential system
2. **Testing:** Test each node individually before running the complete workflow
3. **Monitoring:** Set up alerts for workflow failures and API limits
4. **Documentation:** Keep track of custom configurations and modifications
5. **Backup:** Regularly back up workflow configurations and credentials

**Happy Automating! 🚀**

This workflow template is designed to boost productivity and eliminate manual Jira management tasks. Customize it according to your team's specific needs and processes.
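To make the issue-creation example concrete, here is a hedged sketch of how structured AI output might be mapped onto a Jira create-issue payload in a Code node. The payload shape follows Jira's standard REST fields (`fields.project.key`, `fields.issuetype.name`), but the AI output keys (`summary`, `description`, `acceptanceCriteria`) are illustrative assumptions, not this template's exact schema:

```javascript
// Sketch: turn structured AI output into a Jira create-issue payload.
// The AI output shape here is a hypothetical example.
function buildJiraIssue(projectKey, ai) {
  const criteria = (ai.acceptanceCriteria || [])
    .map((c, i) => `${i + 1}. ${c}`)
    .join("\n");
  return {
    fields: {
      project: { key: projectKey },
      issuetype: { name: "Task" },
      summary: ai.summary,
      description: `${ai.description}\n\nAcceptance Criteria:\n${criteria}`,
    },
  };
}

const payload = buildJiraIssue("PROJ", {
  summary: "Login Page",
  description: "Implement the login page.",
  acceptanceCriteria: ["User can log in with email", "Errors are shown inline"],
});
```

In the actual workflow, the MCP server or the Jira node performs this mapping; the sketch only shows the kind of structure that ends up in the create call.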
by Rully Saputra
# AI Job Matcher with Decodo, Gemini AI & Resume Analysis

Sign up for Decodo — get better pricing here

## Who's it for

This workflow is built for job seekers, recruiters, founders, automation builders, and data engineers who want to automate job discovery and intelligently match job listings against resumes using AI. It's ideal for anyone building job boards, candidate matching systems, hiring pipelines, or personal job alert automations using n8n.

## What this workflow does

This workflow automatically scrapes job listings from SimplyHired using Decodo residential proxies, extracts structured job data with a Gemini AI agent, downloads resumes from Google Drive, extracts and summarizes resume content, and surfaces the most relevant job opportunities. It stores structured results in a database and sends real-time notifications via Telegram, creating a scalable, low-maintenance AI-powered job matching pipeline.

## How it works

1. A schedule trigger starts the workflow automatically
2. Decodo fetches job search result pages from SimplyHired
3. Job card HTML is extracted from the page
4. A Gemini AI agent converts the raw HTML into structured job data
5. Resume PDFs are downloaded from Google Drive
6. Resume text is extracted from the PDF files
7. A Gemini AI agent summarizes key resume highlights
8. Job and resume data are stored in a database
9. Matching job alerts are sent via Telegram

## How to set up

1. Add your Decodo API credentials
2. Add your Google Gemini API key
3. Connect Google Drive for resume access
4. Configure your Telegram bot
5. Set up your database (Google Sheets by default)
6. Update the job search URL with your keywords and location

## Requirements

- Self-hosted n8n instance
- Decodo account (community node)
- Google Gemini API access
- Google Drive access
- Telegram bot token
- Google Sheets or another database

> Note: This template uses a community node (Decodo) and is intended for self-hosted n8n only.

## How to customize the workflow

- Replace SimplyHired with another job board or aggregator
- Add job–resume matching or scoring logic
- Extend the resume summary with custom fields
- Swap Google Sheets for PostgreSQL, Supabase, or Airtable
- Route notifications to Slack, email, or webhooks
- Add pagination or multi-resume processing
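One way to add the suggested job–resume matching logic is a simple keyword-overlap score in a Code node. This is a naive illustrative sketch, not something the template ships with; a semantic comparison via Gemini would usually match better:

```javascript
// Naive job-resume match score: the fraction of job keywords that
// also appear in the resume summary. Purely illustrative.
function matchScore(jobText, resumeText) {
  const tokenize = (s) =>
    new Set(s.toLowerCase().match(/[a-z]{3,}/g) || []);
  const job = tokenize(jobText);
  const resume = tokenize(resumeText);
  if (job.size === 0) return 0;
  let hits = 0;
  for (const word of job) if (resume.has(word)) hits++;
  return hits / job.size;
}
```

Scores range from 0 to 1, so you could send a Telegram alert only when a listing clears a threshold you choose.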
by WeblineIndia
# Real-Time WooCommerce Return Surge Detection with Slack Alerts & Airtable Logging

This n8n workflow monitors WooCommerce refund activity to detect unusual spikes in product returns at the SKU level. It compares return volumes across rolling 24-hour windows, alerts teams in Slack when defined thresholds are exceeded, and logs all detected events into Airtable for tracking and analysis.

## 🚀 Quick Start – Get This Running Fast

1. Import the workflow into n8n.
2. Connect your WooCommerce API credentials.
3. Configure Slack and Airtable credentials.
4. Set your preferred schedule interval.
5. Activate the workflow and start monitoring returns automatically.

## What It Does

This workflow is designed to automatically detect abnormal return behavior in a WooCommerce store. On every scheduled run, it fetches recent orders and refunds directly from the WooCommerce REST API. Refund records are mapped back to their original orders to accurately identify affected SKUs.

Using a rolling time-window comparison, the workflow calculates current versus previous return counts per SKU. It identifies significant increases — either large percentage spikes or unusually high absolute return volumes. This ensures early detection of potential product quality, packaging, or fulfillment issues.

When a return surge is detected, the workflow sends a structured alert to a Slack channel and stores the alert data in Airtable. This creates a searchable, historical log that supports investigations, trend analysis, and operational decision-making.

## Who's It For

This workflow is ideal for:
- eCommerce operations teams
- Quality assurance and product managers
- Customer support leads
- Supply chain and fulfillment teams
- Store owners running WooCommerce at scale

## Requirements to Use This Workflow

To use this workflow, you will need:
- An active WooCommerce store with REST API access
- WooCommerce API credentials (Consumer Key & Secret)
- An active Slack workspace with permission to post messages
- An Airtable base and table for logging alerts
- An n8n instance (self-hosted or cloud)

## How It Works & How To Set Up

### Workflow Execution Flow

1. **Schedule Trigger** runs the workflow at a fixed interval.
2. **Time Window** node defines the current and previous 24-hour comparison windows.
3. **HTTP Orders** fetches recent WooCommerce orders.
4. **HTTP Refunds** fetches refund records.
5. **Orders_Fetch** (Code) maps refunds to parent orders and extracts SKU-level data.
6. **Refund_details** (Code) aggregates returns, compares windows, and calculates increases.
7. **IF Node** checks the surge conditions: ≥100% increase OR ≥25 current returns.
8. **Set Fields** enriches the data with status, run date, and cooldown key.
9. **Slack Node** sends a formatted alert message.
10. **Code Node** normalizes the Slack output into structured fields.
11. **Airtable Node** stores alert records for future reference.

### Setup Instructions

1. Replace `{your_woocommerce_domain}` with your actual store domain.
2. Verify that WooCommerce API permissions allow order and refund access.
3. Select the correct Slack channel in the Slack node.
4. Ensure Airtable column names match the workflow mappings.

## How To Customize Nodes

You can easily adapt this workflow by:
- Changing the schedule frequency in the Schedule Trigger.
- Adjusting `WINDOW_HOURS` in the Code nodes.
- Modifying the alert thresholds in the IF node.
- Customizing the Slack message format.
- Adding or removing Airtable fields for reporting needs.

## Add-ons (Optional Enhancements)

This workflow can be extended with:
- Email or Microsoft Teams notifications.
- Jira or Linear ticket creation.
- Product auto-pause for extreme return spikes.
- Dashboard reporting using BI tools.
- Cooldown logic to prevent repeated alerts per SKU.

## Use Case Examples

Common use cases include:
- Detecting defective product batches early.
- Identifying packaging or shipping damage trends.
- Monitoring supplier quality issues.
- Supporting refund root-cause analysis.
- Improving customer satisfaction metrics.

There can be many more operational and analytical use cases based on your business needs.

## Troubleshooting Guide

| Issue | Possible Cause | Solution |
|------|---------------|----------|
| No Slack alerts | Threshold not met | Lower IF condition limits |
| Empty SKU values | Missing SKU in WooCommerce | Use product name or ID fallback |
| No data in Airtable | Column mismatch | Verify field names and types |
| API errors | Invalid credentials | Re-authorize WooCommerce API |
| Duplicate alerts | Frequent schedule | Add cooldown or deduplication logic |

## Need Help?

Need assistance setting this up or customizing it for your business? WeblineIndia can help you implement, extend, or build similar automation workflows tailored to your operational needs. Whether you want advanced alerting, deeper analytics, or cross-system integrations, our team is ready to help you get the most out of n8n automation.
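The window comparison done in the Refund_details Code node and the IF node can be sketched like this. The thresholds (≥100% increase or ≥25 current returns) come from the workflow description above; the function signature and the zero-baseline handling are assumptions:

```javascript
// Sketch of the per-SKU surge check: compare returns in the current
// 24h window against the previous one. Thresholds mirror the IF node:
// >=100% increase OR >=25 returns in the current window.
function isReturnSurge(currentCount, previousCount) {
  const pctIncrease =
    previousCount > 0
      ? ((currentCount - previousCount) / previousCount) * 100
      : currentCount > 0 ? Infinity : 0; // any returns after zero count as a spike
  return pctIncrease >= 100 || currentCount >= 25;
}
```

Widening `WINDOW_HOURS` smooths out noise from low-volume SKUs at the cost of slower detection; the thresholds in the IF node trade off the same way.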
by Rajeet Nair
## Overview

This workflow automates financial reconciliation across multiple data sources such as bank statements, invoices, ERP systems, and CSV uploads. It standardizes all incoming data, performs rule-based matching, enhances results with AI-powered fuzzy matching, and assigns confidence scores. High-confidence matches are auto-reconciled, while uncertain ones are flagged for human review.

## How It Works

1. **Data Ingestion:** Receives financial data via webhook from different sources.
2. **Source Detection & Routing:** Identifies the data type and routes it to the correct normalization flow.
3. **Data Normalization:** Converts all records into a unified schema with consistent fields such as ID, amount, date, and description.
4. **Data Merging:** Combines all normalized records into a single dataset for matching.
5. **Deterministic Matching:** Matches records using exact field combinations such as ID, amount, and date to generate initial confidence.
6. **Match Quality Check:** Filters low-confidence matches for further analysis.
7. **AI Fuzzy Matching:** Uses AI to identify near matches based on descriptions, amount tolerance, and date proximity.
8. **Confidence Scoring:** Combines deterministic and AI results into a final confidence score with a detailed audit trail.
9. **Decision Routing:** High confidence → auto-reconciled; low confidence → flagged for human review.
10. **Reporting:** Logs reconciliation results into Google Sheets.
11. **Notifications:** Sends a summary report to Slack for visibility.

## Setup Instructions

1. Configure the webhook to receive financial data
2. Set matching keys and confidence thresholds
3. Connect OpenAI for fuzzy matching
4. Connect Google Sheets for reporting
5. Connect Slack for notifications
6. Ensure input data follows the expected formats
7. Test with sample financial data
8. Activate the workflow

## Use Cases

- Bank statement vs. invoice reconciliation
- ERP vs. accounting system matching
- Financial audit automation
- Detecting missing or duplicate transactions
- Reducing manual reconciliation effort

## Requirements

- n8n instance with webhook support
- OpenAI API access
- Google Sheets account
- Slack workspace
- Structured financial datasets (CSV/API)

## Notes

- Deterministic matching ensures accuracy for exact matches.
- AI fuzzy matching improves coverage for ambiguous records.
- Confidence scoring provides transparency and auditability.
- Human review ensures control over uncertain reconciliations.
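The deterministic matching step can be pictured as building an exact composite key per record. This is a sketch under assumed field names (`id`, `amount`, `date`) taken from the unified schema described above, not the template's exact Code node:

```javascript
// Sketch of deterministic matching: records from two sources match
// when their (id, amount, date) composite keys are identical.
// Field names follow the unified schema described in the overview.
const keyOf = (r) => `${r.id}|${r.amount.toFixed(2)}|${r.date}`;

function deterministicMatch(bankRecords, invoiceRecords) {
  const invoicesByKey = new Map(invoiceRecords.map((r) => [keyOf(r), r]));
  return bankRecords.map((bank) => {
    const invoice = invoicesByKey.get(keyOf(bank));
    return { bank, invoice: invoice || null, confidence: invoice ? 1.0 : 0.0 };
  });
}

const results = deterministicMatch(
  [{ id: "INV-1", amount: 100, date: "2024-01-05" },
   { id: "INV-2", amount: 55.5, date: "2024-01-06" }],
  [{ id: "INV-1", amount: 100.0, date: "2024-01-05" }],
);
```

Records that come out with confidence 0 here are exactly the ones the workflow hands to the AI fuzzy-matching stage.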
by Andrey
## Overview

This n8n workflow automates brand monitoring across social media platforms (Reddit, LinkedIn, X, and Instagram) using the AnySite API. It searches for posts mentioning your defined keywords, stores results in n8n Data Tables, analyzes engagement and sentiment, and generates a detailed AI-powered social media report that is automatically sent to your email.

## Key Features

- **Multi-Platform Monitoring:** Reddit, LinkedIn, X (Twitter), and Instagram
- **Automated Post Collection:** Searches for new posts containing tracked keywords
- **Data Persistence:** Saves all posts and comments in structured Data Tables
- **AI-Powered Reporting:** Uses GPT (OpenAI API) to summarize and analyze trends, engagement, and risks
- **Automated Email Delivery:** Sends comprehensive daily/weekly reports via Gmail
- **Comment Extraction:** Collects and formats post comments for deeper sentiment analysis
- **Scheduling Support:** Can be executed manually or automatically (e.g., every night)

## How It Works

### Triggers

The workflow runs:
- Automatically (via Schedule Trigger), e.g., once daily
- Manually (via Manual Trigger), for testing or on-demand analysis

### Data Collection Process

1. **Keyword Loading:** Reads all keywords from the "Brand Monitoring Words" Data Table.
2. **Social Media Search:** For each keyword, the workflow calls the AnySite API endpoints:
   - `api/reddit/search/posts`
   - `api/linkedin/search/posts`
   - `api/twitter/search/posts` (X)
   - `api/instagram/search/posts`
3. **Deduplication:** Before saving, checks whether a post already exists in the "Brand Monitoring Posts" table.
4. **Data Storage:** Inserts new posts into the Data Table with fields like `type`, `title`, `url`, `vote_count`, `comment_count`, etc.
5. **Comments Enrichment:** For Reddit and LinkedIn, retrieves and formats comments into JSON strings, then updates the record.
6. **AI Analysis & Report Generation:** The AI Agent (OpenAI GPT model) aggregates the posts; analyzes sentiment, engagement, and risks; and generates a structured HTML email report.
7. **Email Sending:** Sends the final report via Gmail using your connected account.

## Setup Instructions

### Requirements

- Self-hosted or cloud n8n instance
- **AnySite API key** – https://AnySite.io
- **OpenAI API key** (GPT-4o or later)
- Connected Gmail account (for report delivery)

### Installation Steps

1. **Import the workflow:** Import the provided file `Social Media Monitoring.json`.
2. **Configure credentials:**
   - AnySite API: add an `access-token` header with your API key
   - OpenAI: add your OpenAI API key in the "OpenAI Chat Model" node
   - Gmail: connect your Gmail account (OAuth2) in the "Send a message in Gmail" node
3. **Create the required Data Tables:**

1️⃣ **Brand Monitoring Words**

| Field | Type | Description |
|-------|------|-------------|
| word | string | Keyword or brand name to monitor |

> Each row represents a single keyword to be tracked.

2️⃣ **Brand Monitoring Posts**

| Field | Type | Description |
|-------|------|-------------|
| type | string | Platform type (e.g., reddit, linkedin, x, instagram) |
| title | string | Post title or headline |
| url | string | Direct link to post |
| created_at | string | Post creation date/time |
| subreddit_id | string | (Reddit only) subreddit ID |
| subreddit_alias | string | (Reddit only) subreddit alias |
| subreddit_url | string | (Reddit only) subreddit URL |
| subreddit_description | string | (Reddit only) subreddit description |
| comment_count | number | Number of comments |
| vote_count | number | Votes, likes, or reactions count |
| subreddit_member_count | number | (Reddit only) member count |
| post_id | string | Unique post identifier |
| text | string | Post body text |
| comments | string | Serialized comments (JSON string) |
| word | string | Matched keyword that triggered capture |

## AI Reporting Logic

- Collects all posts gathered during the run
- Aggregates them by keyword and platform
- Evaluates sentiment, engagement, and risk signals
- Summarizes findings with an executive summary and key metrics
- Sends the Social Media Intelligence Report to your configured email

## Customization Options

- **Schedule:** Adjust the trigger frequency (daily, hourly, etc.)
- **Keywords:** Add or remove keywords in the **Brand Monitoring Words** table
- **Report Depth:** Modify the system prompts in the "AI Agent" node to customize tone and analysis focus
- **Email Recipient:** Change the target email address in the "Send a message in Gmail" node

## Troubleshooting

| Issue | Solution |
|-------|-----------|
| No posts found | Check AnySite API key and keyword relevance |
| Duplicate posts | Verify Data Table deduplication setup |
| Report not sent | Confirm Gmail OAuth2 connection |
| AI Agent error | Ensure OpenAI API key and model selection are correct |

## Best Practices

- Use specific brand or product names as keywords for better precision
- Run the workflow daily to maintain fresh insights
- Periodically review and clean the Data Tables
- Adjust AI prompt parameters to refine the analytical tone
- Review AI-generated reports to ensure data quality

## Author Notes

Created for automated cross-platform brand reputation monitoring, enabling real-time insights into how your brand is discussed online.
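The deduplication step (check before insert) can be sketched as a simple set lookup in a Code node. The exact key field the template uses is not stated, so treating the post `url` as the identity key is an assumption:

```javascript
// Sketch: keep only posts whose URL is not already stored in the
// "Brand Monitoring Posts" table. `existingUrls` stands in for the
// result of a Data Table lookup; using url as the key is an assumption.
function filterNewPosts(fetchedPosts, existingUrls) {
  const seen = new Set(existingUrls);
  return fetchedPosts.filter((post) => {
    if (seen.has(post.url)) return false;
    seen.add(post.url); // also drops duplicates within the same run
    return true;
  });
}

const fresh = filterNewPosts(
  [{ url: "https://reddit.com/a" }, { url: "https://reddit.com/a" }, { url: "https://x.com/b" }],
  ["https://reddit.com/a"],
);
```

Deduplicating within the run as well as against stored rows keeps a single keyword match from creating duplicate Data Table entries.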
by Dr. Firas
# 💥 TikTok Viral Trend Detector → Seedance 2.0 → Auto-Publish with Blotato

📄 Documentation: Notion Guide

## Who is this for?

This workflow is built for content creators, social media managers, and automation enthusiasts who want to automate the entire TikTok content pipeline — from trend research to video generation to publishing — without filming anything. It's especially powerful for anyone running faceless content channels on TikTok and Instagram, or agencies managing multiple accounts who need to produce trend-driven content at scale with zero manual effort.

> ⚠️ Disclaimer: This workflow uses Community Nodes (Blotato). These are only available on self-hosted n8n instances.

## What problem is this workflow solving?

Creating viral TikTok content requires three separate skills: trend research, video production, and publishing strategy. Most creators spend hours on each manually — scrolling TikTok for inspiration, editing videos in CapCut, then posting with the right hashtags at the right time. This workflow automates the entire pipeline in one shot:

1. It finds what's trending right now on TikTok in your niche
2. Analyzes the viral pattern (hook structure, emotions, visual style)
3. Generates an original concept inspired by that pattern (no copying)
4. Creates a cinematic AI video with Seedance 2.0
5. Publishes automatically to TikTok and Instagram via Blotato

## What this workflow does

This automation runs in two parts:

### Part 1 — Claude Cowork (Trend Intelligence)

You prompt Claude Cowork with your niche (e.g., "weird productivity hacks"). It uses Apify to scrape the top viral TikTok videos on that topic, then performs a cross-video analysis to extract the viral recipe:
- Hook pattern (the first 3 seconds)
- Narrative structure (list format, before/after, tutorial…)
- Dominant emotion (curiosity, recognition, surprise)
- Recurring visual elements

Claude then generates 3 original video concepts that apply this viral recipe — not copies, but fresh ideas built on proven patterns. The best concept is sent to the n8n webhook as a structured JSON payload.

### Part 2 — n8n Automation Pipeline

Once the webhook receives the payload, n8n takes over:

1. **Webhook** receives the trend analysis + selected concept from Claude Cowork
2. **Normalize Payload** handles both Postman (array format) and direct Cowork formats
3. **Google Sheets** archives the full trend report (analyzed videos, viral pattern, all 3 concepts)
4. **Claude (Anthropic node)** generates a Seedance-optimized video prompt: one continuous cinematic scene, no human faces, 9:16 vertical, plus a TikTok caption with hashtags
5. **AtlasCloud / Seedance 2.0** generates the AI video from the text prompt (720p, native audio)
6. **Polling loop** checks every 5 seconds until the video is ready
7. **Google Sheets** logs the production (video URL, prompt, caption, prediction ID)
8. **Respond Success** immediately confirms to the webhook caller (before publishing, to avoid timeouts)
9. **Blotato** publishes in parallel to TikTok and Instagram simultaneously
10. **Google Sheets** updates the row with the publication status and post URLs for both platforms

## Setup

### Required accounts

- A Claude.ai Pro account with Claude Cowork enabled (for Part 1 — trend scraping and analysis)
- An Apify account to run the TikTok scraper actor (clockworks/tiktok-scraper). Apify handles the TikTok data extraction — without it, the Cowork agent cannot retrieve viral videos.

### Credentials to configure in n8n

**Google Sheets OAuth2**
- Type: OAuth2
- Used for: saving trend reports and production logs
- Connect via n8n's built-in Google Sheets credentials

**Anthropic API Key**
- Type: Anthropic credentials (built-in n8n node)
- Get it at: console.anthropic.com
- Used for: the Claude node — generating the Seedance video prompt and TikTok caption

**Atlas Cloud API Key**
- Type: Header Auth (Authorization: Bearer)
- Get it at: AtlasCloud
- Used for: Seedance 2.0 text-to-video generation (HTTP Request node)

**Blotato API Key**
- Type: Blotato credentials (community node)
- Get it at: Blotato
- Used for: publishing to TikTok and Instagram
- Install the community node: `npm install @blotato/n8n-nodes-blotato`

## How to customize this workflow to your needs

- **Change the niche:** Edit the Claude Cowork prompt to target a different niche: cooking hacks, finance tips, fitness routines, language learning — any niche with active TikTok communities works.
- **Change the video model:** The workflow uses `bytedance/seedance-2.0/text-to-video` via AtlasCloud. You can swap it for `seedance-2.0-fast` (cheaper, 4s max) or any other AtlasCloud-supported model by editing the **Seedance - Start Generation** HTTP Request node body.
- **Add more platforms:** Since Blotato supports 9 platforms (YouTube Shorts, LinkedIn, Pinterest, Threads, etc.), duplicate the **Create post** node and change the platform parameter to publish everywhere at once.
- **Change the publishing schedule:** Replace the Webhook trigger with a Schedule Trigger to run the full pipeline automatically every day at a set time.
- **Select a different concept:** Claude Cowork generates 3 video concepts per run. Change `selected_concept_id` in the JSON payload (1, 2, or 3) to test different concepts on the same trend data.
- **Adjust video duration:** The workflow currently uses `duration: -1` (auto) for the Seedance node. Set it to 4 for faster/cheaper generation, or test higher values depending on your AtlasCloud plan limits.

🎥 Full tutorial: Watch on YouTube

## 👋 Need help or want to customize this?

📩 Contact: LinkedIn
📺 YouTube: @DRFIRASS
🚀 Workshops: n8n courses (in French)

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube / 🚀 n8n courses (in French)
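The Normalize Payload step described in Part 2 — accepting both Postman's array-wrapped body and the direct Cowork object — could be sketched like this in a Code node. Only `selected_concept_id` is named by the workflow description; the other keys are illustrative assumptions:

```javascript
// Sketch of the Normalize Payload step: unwrap an array-wrapped body
// (as Postman sends it) into the same object shape the Cowork agent
// posts directly. Keys other than selected_concept_id are assumed.
function normalizePayload(body) {
  const payload = Array.isArray(body) ? body[0] : body;
  return {
    selected_concept_id: payload.selected_concept_id ?? 1,
    concepts: payload.concepts ?? [],
    trend_analysis: payload.trend_analysis ?? {},
  };
}

const fromPostman = normalizePayload([{ selected_concept_id: 2, concepts: ["a", "b", "c"] }]);
const direct = normalizePayload({ selected_concept_id: 2, concepts: ["a", "b", "c"] });
```

Normalizing once at the top of the pipeline means every downstream node can rely on a single payload shape regardless of how the webhook was called.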
by Takumi Oku
## Who's it for

This template is designed for Print-on-Demand (POD) business owners, independent artists, and e-commerce managers who want to automate the process of turning raw design files into listed products without manual data entry.

## How it works

This workflow acts as an automated merchandise factory that handles everything from image processing to marketing.

1. **Trigger:** The workflow starts when a new design file is uploaded to a specific Google Drive folder.
2. **Analyze:** OpenAI Vision analyzes the image to determine the subject, mood, and color palette, and assesses copyright risk.
3. **Process:** The image background is removed using Remove.bg, and the clean asset is uploaded to Cloudinary.
4. **Mockup:** The workflow generates realistic product mockups (e.g., T-shirts, tote bags) by overlaying the design onto base product images using Cloudinary transformations.
5. **Copywriting:** OpenAI writes an SEO-friendly product title, description, and tags based on the visual analysis.
6. **Draft:** A draft product is created in Shopify with the generated details and mockup image.
7. **Approval:** A message is sent to Slack with the product details and mockup. The workflow pauses and waits for a human to click "Approve" or "Reject".
8. **Publish & Promote:** If approved, the product is published to Shopify and automatically posted to Instagram and Pinterest. If rejected, a notification is sent to Slack.

## How to set up

1. **Base Images:** Upload your blank product images (e.g., a white t-shirt, a tote bag) to your Cloudinary account and note their Public IDs.
2. **Configuration:** Open the **Workflow Configuration** node and fill in all the required fields, including your API keys and the Cloudinary Public IDs for your base products.
3. **Credentials:** Configure the credentials for Google Drive, OpenAI, Shopify, Slack, Instagram, and Pinterest in their respective nodes.
4. **Folder ID:** Update the **Google Drive Trigger** node with the ID of the folder you want to watch.

## Requirements

- n8n (self-hosted or Cloud)
- Google Drive account
- OpenAI API key (access to the GPT-4o model recommended for Vision capabilities)
- Remove.bg API key
- Cloudinary account
- Shopify store
- Slack workspace
- Instagram Business account
- Pinterest account

## How to customize

- **Mockups:** Modify the **Code - Generate Mockup URLs** node to add more product types (e.g., hoodies, mugs) by adding their Cloudinary Public IDs.
- **Prompt Engineering:** Adjust the system prompt in the **OpenAI - SEO Copywriting** node to match your brand voice or language style.
- **Social Channels:** Add or remove nodes to support other platforms like Twitter (X) or Facebook Pages.
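As a rough idea of what the **Code - Generate Mockup URLs** node does, here is a sketch that composes Cloudinary overlay URLs (an `l_<publicId>` layer closed by `fl_layer_apply`). The cloud name, public IDs, and sizing/gravity values are placeholder assumptions; your actual node will use the IDs from the Workflow Configuration:

```javascript
// Sketch: build a Cloudinary URL that overlays a design onto a base
// product image using layer transformations. Cloud name, public IDs,
// and sizing/gravity values are placeholder assumptions.
function mockupUrl(cloudName, basePublicId, designPublicId) {
  const overlay = `l_${designPublicId.replace(/\//g, ":")},w_600`; // add layer + resize it
  const place = "fl_layer_apply,g_center,y_-50";                   // position the layer
  return `https://res.cloudinary.com/${cloudName}/image/upload/${overlay}/${place}/${basePublicId}.jpg`;
}

const products = ["tshirt_white", "tote_natural"]; // base product image public IDs
const urls = products.map((p) => mockupUrl("my-cloud", p, "designs/skull_v1"));
```

Adding a new product type is then just appending another base-image public ID to the list, which matches the customization note above.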
by PDF Vector
## Overview

Transform your accounts payable department with this enterprise-grade invoice processing solution. The workflow automates the entire invoice lifecycle, from document ingestion through payment processing. It handles invoices from multiple sources (Google Drive, email attachments, API submissions), extracts data using AI, validates against purchase orders, routes invoices for the appropriate approvals based on amount thresholds, and integrates seamlessly with your ERP system. The solution includes vendor master data management, duplicate invoice detection, real-time spend analytics, and complete audit trails for compliance.

## What You Can Do

This comprehensive workflow creates an intelligent invoice processing pipeline that monitors multiple input channels (Google Drive, email, webhooks) for new invoices and automatically extracts data from PDFs, images, and scanned documents using AI. It validates vendor information against your master database, matches invoices to purchase orders, and detects discrepancies. The workflow implements multi-level approval routing based on invoice amount and department, prevents duplicate payments through intelligent matching algorithms, and integrates with QuickBooks, SAP, or other ERP systems. It also generates real-time dashboards showing processing metrics and cash flow insights while sending automated reminders for pending approvals.

## Who It's For

Perfect for medium to large businesses, accounting departments, and financial service providers processing more than 100 invoices monthly across multiple vendors. Ideal for organizations that need to enforce approval hierarchies and spending limits, require integration with existing ERP/accounting systems, want to reduce processing time from days to minutes, need audit trails and compliance reporting, and want to eliminate manual data entry errors and duplicate payments.
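The duplicate-payment check mentioned above can be illustrated with a small sketch. This is not the workflow's exact matching logic, only a plausible shape for it: flag an invoice when the same vendor already has one with the same invoice number, or a near-identical amount within a short date window.

```javascript
// Illustrative duplicate-invoice check; field names and thresholds
// (1% amount tolerance, 7-day window) are assumptions, not the
// template's shipped configuration.
function isLikelyDuplicate(candidate, existing) {
  return existing.some((inv) => {
    if (inv.vendorId !== candidate.vendorId) return false;
    // Exact invoice-number match from the same vendor is a duplicate.
    if (inv.invoiceNumber === candidate.invoiceNumber) return true;
    // Otherwise, treat a near-equal amount within a week as suspicious.
    const amountClose =
      Math.abs(inv.amount - candidate.amount) / candidate.amount < 0.01;
    const daysApart =
      Math.abs(new Date(inv.date) - new Date(candidate.date)) / 86400000;
    return amountClose && daysApart <= 7;
  });
}
```

A production version would typically add fuzzy matching on vendor names and normalization of invoice-number formats before comparison.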
## The Problem It Solves

Manual invoice processing creates significant operational challenges: data entry errors (3-5% error rate), processing delays (8-10 days per invoice), duplicate payments (0.1-0.5% of invoices), approval bottlenecks causing late fees, lack of visibility into pending invoices and cash commitments, and compliance issues from missing audit trails. This workflow reduces processing time by 80%, eliminates data entry errors, prevents duplicate payments, and provides complete visibility into your payables process.

## Setup Instructions

- **Google Drive:** Create dedicated folders for invoice intake and configure access permissions
- **PDF Vector:** Set up API credentials with appropriate rate limits for your volume
- **Database:** Deploy the provided schema for the vendor master and invoice tracking tables
- **Email integration (optional):** Configure IMAP credentials for invoice email monitoring
- **ERP connection:** Set up API access to your accounting system (QuickBooks, SAP, etc.)
- **Approval rules:** Define approval thresholds and routing rules in the configuration node
- **Notification setup:** Configure Slack/email for approval notifications and alerts

## Key Features

- **Multi-Channel Invoice Ingestion:** Automatically collect invoices from Google Drive, email attachments, and API uploads
- **Advanced OCR and AI Extraction:** Process any invoice format, including handwritten notes and poor-quality scans
- **Vendor Master Integration:** Validate and enrich vendor data, maintaining a clean vendor database
- **3-Way Matching:** Automatically match invoices to purchase orders and goods receipts
- **Dynamic Approval Routing:** Route based on amount, department, vendor, or custom rules
- **Duplicate Detection:** Prevent duplicate payments using fuzzy matching algorithms
- **Real-Time Analytics:** Track KPIs such as processing time, approval delays, and early payment discounts
- **Exception Handling:** Intelligently route problematic invoices for manual review
- **Audit Trail:** Complete tracking of all actions, approvals, and system modifications
- **Payment Scheduling:** Optimize payment timing to capture discounts and manage cash flow

## Customization Options

This workflow can be customized to add industry-specific extraction fields, implement GL coding rules based on vendor or amount, create department-specific approval workflows, add currency conversion for international invoices, integrate with additional systems (banks, expense management), configure custom dashboards and reporting, set up vendor portals for invoice status inquiries, and implement machine learning for automatic GL coding suggestions.

Note: This workflow uses the PDF Vector community node. Install it from the n8n community nodes collection before using this template.
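Amount-based approval routing of the kind configured in the approval-rules node can be sketched as a small threshold table. The thresholds and route names below are examples only, not the template's defaults:

```javascript
// Hypothetical routing table: first rule whose ceiling covers the
// invoice amount wins. Adjust thresholds and routes to your hierarchy.
const APPROVAL_RULES = [
  { maxAmount: 1000,     route: "auto-approve" },
  { maxAmount: 10000,    route: "department-manager" },
  { maxAmount: 50000,    route: "finance-director" },
  { maxAmount: Infinity, route: "cfo" },
];

function routeInvoice(invoice) {
  // Rules are ordered ascending, so find() returns the tightest match.
  return APPROVAL_RULES.find((r) => invoice.amount <= r.maxAmount).route;
}
```

Department- or vendor-specific rules would extend each entry with extra predicates evaluated alongside the amount check.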
by Michael Gullo
## Workflow Purpose

The workflow scans submitted URLs with urlscan.io and VirusTotal, combines the results into a single structured summary, and sends the report via Telegram. I built it for people who work primarily from their phones and receive a constant stream of emails throughout the day. If a user gets an email asking them to sign a document, review a report, or take any action where the link looks suspicious, they can simply open the Telegram bot and quickly check whether the URL is safe before clicking it.

## Key Components

1. **Input / Trigger** — Accepts URLs that need to be checked and initiates requests to VirusTotal and urlscan.io.
2. **VirusTotal Scan** — Always returns results if the URL is reachable; provides reputation, malicious/clean flags, and scan metadata.
3. **urlscan.io Scan** — Returns details on how the URL behaves when loaded (domains, requests, resources, etc.); sometimes fails due to blocks or restrictions.
4. **Error Handling with Code Node** — Checks whether urlscan.io responded successfully and ensures the workflow always produces a summary, even if urlscan.io fails.
5. **Summary Generation** — If both scans succeed, summarizes the combined findings from VirusTotal and urlscan.io. If urlscan.io fails, the summary states clearly: "urlscan.io scan was blocked/failed. Relying on VirusTotal results." Either way, the user still gets a complete security report.
6. **Telegram Output** — The final formatted summary is delivered to a Telegram chat via the bot. The chat ID issue was fixed after the Code node restructuring.

## Outcome

The workflow guarantees a consistent, user-friendly summary regardless of urlscan.io failures, leveraging VirusTotal as the fallback source of truth. The Telegram bot provides real-time alerts with clear indications of scan success or failure.

## Prerequisites

### Telegram

- In Telegram, start a chat with @BotFather.
- Send /newbot, then pick a name and a unique username.
- Copy the HTTP API token BotFather returns and store it securely.
- Start a DM with your bot and send any message.
- Call getUpdates and read the chat.id.

### urlscan.io

- Create or log into your urlscan.io account.
- Go to Settings & API → New API key and generate a key.
- (Recommended) In Settings & API, set Default Scan Visibility to Unlisted to avoid exposing PII in public scans.
- Save the key securely (environment variable or n8n credentials).
- Rate limits: urlscan.io enforces per-minute, per-hour, and per-day quotas; exceeding them returns HTTP 429. You can view your personal quotas on their dashboard/quotas endpoint.

### VirusTotal

- Sign up or sign in to VirusTotal Community.
- Open My API key (Profile menu) and copy your public API key.
- Store it securely (environment variable or n8n credentials).
- For a more reliable connection and improved scanning results, enable the Header section in the node settings. Add a header parameter with a clear name (e.g., x-apikey) and paste your API key into the Value field.
- Rate limits (public API): 4 requests/minute, 500/day; not for commercial workflows. Consider Premium if you'll exceed this.

## How to Customize the Workflow

This workflow is designed to be highly customizable. Additional malicious-website scanners can be integrated through HTTP Request nodes; to make this work, update the Merge node so that all information flows correctly through the workflow. You can also connect Gmail or Outlook nodes to automatically test URLs, binary attachments, and other data received via email, helping you evaluate it before opening it. Finally, you can customize how reports are delivered: results can be sent through Telegram (the default), Slack, or Microsoft Teams, or saved to Google Drive or a Google Sheet for recordkeeping and audit purposes.
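The fallback behavior of the error-handling and summary-generation components can be sketched as a Code node like the one below. Field names (`malicious`, `requests`, `domains`, `error`) are illustrative; the actual node reads whatever properties the two API responses expose:

```javascript
// Sketch of the summary-building Code node: always produce a report,
// falling back to VirusTotal alone when urlscan.io failed.
function buildSummary(vtResult, urlscanResult) {
  const lines = [
    `VirusTotal: ${vtResult.malicious} engine(s) flagged the URL as malicious`,
  ];
  if (urlscanResult && !urlscanResult.error) {
    lines.push(
      `urlscan.io: ${urlscanResult.requests} requests across ${urlscanResult.domains} domains`
    );
  } else {
    // Wording matches the summary text described in the components above.
    lines.push(
      "urlscan.io scan was blocked/failed. Relying on VirusTotal results."
    );
  }
  return lines.join("\n");
}
```

Because every branch appends a line, the Telegram node downstream always receives a complete message, which is what fixed the earlier chat ID issue described above.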
For consulting and support, or if you have questions, feel free to connect with me on LinkedIn or via email.
by David Olusola
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## WordPress to Blotato Social Publisher

**Overview:** This automation monitors your WordPress site for new posts and automatically creates platform-specific social media content using AI, then posts to Twitter, LinkedIn, and Facebook via Blotato.

## What it does

- Monitors your WordPress site for new posts every 30 minutes
- Filters posts published in the last hour to avoid duplicates
- Processes each new post individually
- AI generates optimized content for each social platform (Twitter, LinkedIn, Facebook)
- Extracts platform-specific content from the AI response
- Publishes to all three social media platforms via the Blotato API

## Setup Required

### WordPress Connection

- Configure WordPress credentials in the "Check New Posts" node
- Enter your WordPress site URL, username, and password/app password

### Blotato Social Media API

- Get your Blotato API key from your Blotato account
- Configure API credentials in the Blotato connection node
- Map each platform (Twitter, LinkedIn, Facebook) to the correct Blotato channel

### AI Configuration

- Set up Google Gemini API credentials
- Connect the Gemini model to the "AI Social Content Creator" node

## Customization Options

- **Posting frequency:** Modify the schedule trigger (default: every 30 minutes)
- **Content tone:** Adjust the AI system message for different writing styles
- **Post filtering:** Change the time window in the WordPress node (default: last hour)
- **Platform selection:** Add or remove social media platforms as needed
- **Hashtag strategy:** Modify hashtag strategies in the AI prompts

## Testing

- Run the workflow manually to test connections
- Verify posts appear correctly on all platforms
- Monitor for API rate-limit issues

## Features

- Platform-optimized content (hashtags, character limits, professional tone)
- Duplicate prevention system
- Batch processing for multiple posts
- Featured image support
- Customizable posting frequency

## Need Help?

For n8n coaching or one-on-one consultation
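The "last hour" duplicate-prevention filter described in the publisher above can be sketched as follows. The `date_gmt` field follows the WordPress REST API's post shape; the actual node output may differ slightly:

```javascript
// Hedged sketch, assuming posts carry a WordPress REST-style `date_gmt`
// timestamp (no timezone suffix, so "Z" is appended before parsing).
function postsFromLastHour(posts, now = new Date()) {
  const oneHourMs = 60 * 60 * 1000;
  return posts.filter((p) => {
    const publishedAt = new Date(p.date_gmt + "Z");
    const age = now - publishedAt;
    // Keep only posts published within the last hour.
    return age >= 0 && age <= oneHourMs;
  });
}
```

Widening the window here is the "Post filtering" customization mentioned above; anything older than the window is skipped so reruns do not repost old content.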