by Nik B.
Automatically fetches daily sales, shifts, and receipts from Loyverse, calculates gross profit, net operating profit, and other key metrics, saves them to a Google Sheet, and sends out a daily report via email.

Who's it for

This template is for any business owner, manager, or analyst using Loyverse POS who needs more advanced financial reporting. If you're a restaurant, bar, or retail owner who wants to automatically track daily net profit, compare sales to historical averages, and build a custom financial dashboard in Google Sheets, this workflow is for you.

How it works / What it does

This workflow runs automatically on a daily schedule. It fetches all sales data and receipts from your Loyverse account for the previous business day, defined by your custom shift times (even past midnight). A powerful Code node then processes all the data to calculate the metrics that Loyverse either doesn't provide at all, or only spreads out across several separate reports instead of one consolidated place. Metrics already set up include:

- Total Revenue, Gross Profit, and Net Operating Profit
- Cash handling differences (over/under)
- Average spend per receipt (ATV)
- 30-day rolling Net Operating Profit (NOP)
- Performance vs. your historical weekday average

Finally, it appends the single, calculated row of daily metrics to a Google Sheet and sends an easily customizable summary report to your email.

How to set up

This workflow includes detailed Sticky Notes to guide you through the setup process. Because every business has a unique POS configuration (different POS devices, categories, and payment types), you'll need to set up a few things manually before executing the workflow. I've tried to make this as easy as possible to follow, and the entire setup should only take about 15 minutes.

Preparations & Credential setup

1. Subscribe to the "Integrations" add-on in Loyverse ($9 / month) to gain API access.
2. Create an Access Token in Loyverse.
3. Create credentials: In your n8n instance, create credentials for Loyverse (use "Generic" > "Bearer Auth"), Google Sheets (OAuth2), and your email (SMTP or other).
4. Make a copy of the pre-configured Google Spreadsheet (link in the second Sticky Note inside the workflow).
5. Fill MASTER CONFIG: Open the MASTER CONFIG node. Follow the comments inside to add your Google Sheet ID, sheet names, business hours, timezone, and Loyverse IDs (for POS devices, payment types, and categories).

Configure Google Sheet Nodes

1. Configure Read Historical Data: Open this node. Follow the instructions in the nearby Sticky Note to paste the expressions for your Document ID and Sheet Name.
2. Configure Save Product List: Open this node. Paste in the expressions for Document ID and Sheet Name. The column mapper will load; map your sheet columns (e.g., item_name) to the data on the left (e.g., {{ $json.item_name }}).
3. Configure Save Latest Sales Data: Open this node. Paste in the expressions for Document ID and Sheet Name. Save and run the workflow. After that, the column mapper will load. This is the most important step: map your sheet's column names (e.g., "Total Revenue") to the calculated metrics from the Calculate All Metrics node (e.g., {{ $json.totalGrossRevenue }}).
4. Activate the workflow. 🫡

Requirements

- Loyverse Integrations subscription
- Loyverse Access Token
- Credentials for Loyverse (Bearer Auth)
- Credentials for Google Sheets (OAuth2)
- Credentials for Email/SMTP sender

How to customize the workflow

This template is designed to be highly flexible.
- Central Configuration: Almost all customization (POS devices, categories, payment types, sheet names) is done in the MASTER CONFIG node. You don't need to dig through other nodes.
- Add/Remove Metrics: The Calculate All Metrics node has additional metrics already set up; just add the relevant columns to the SalesData sheet, or add your own calculations to the node (see the sketch below). Any new metric you add (e.g., metrics.myNewMetric = 123) will be available to map in the Save Latest Sales Data node.
- Email Body: You can easily edit the Send email node to change the text or add new metrics from the Calculate All Metrics node.
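As an illustration of adding a custom metric, a few lines like the following could go near the end of the Calculate All Metrics Code node. This is only a sketch under assumptions: the variable names (`receipts`, `metrics`) and the receipt field names are placeholders and must be matched to the node's actual code.

```javascript
// Minimal sketch for a custom metric in the "Calculate All Metrics" Code node.
// Names are illustrative, not the template's actual variables.
const receipts = $input.first().json.receipts || []; // assumed input shape
const metrics = {};                                   // the template builds a similar object

// Example custom metric: share of receipts paid in cash.
const cashReceipts = receipts.filter(r => r.paymentType === 'CASH'); // field name assumed
metrics.cashReceiptShare = receipts.length
  ? Number(((cashReceipts.length / receipts.length) * 100).toFixed(1))
  : 0;

// Any key added to `metrics` becomes available to map in "Save Latest Sales Data".
return [{ json: metrics }];
```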
by Amit Kumar
Overview

This n8n template automates the entire process of generating short-form AI videos and publishing them across multiple social media platforms. It combines Google Gemini for structured prompt creation, KIE AI for video generation, and Blotato for centralized publishing. The result is a fully automated content pipeline ideal for creators, marketers, agencies, or anyone who wants consistent, hands-free content generation.

This workflow is especially useful for short-video creators, meme pages, educational creators, UGC teams, auto-posting accounts, and brands who want to maintain high-frequency posting without manual effort.

Good to Know

- **API costs:** KIE AI generates videos using paid tokens/credits. Prices vary based on model, duration, and resolution (check KIE AI pricing).
- **Google Gemini model restrictions:** Certain Gemini models are geo-limited. If you receive "model not found," the model may not be available in your region.
- **Blotato publishing:** Blotato supports many platforms: YouTube, Instagram, Facebook, LinkedIn, TikTok, X, Bluesky, and more. Platform availability depends on your Blotato setup.
- **Runtime considerations:** Video generation can take time (10–60 seconds or more, depending on the complexity).
- **Self-hosted requirement:** This workflow uses a community node (Blotato). Community nodes do not run on n8n Cloud, so a self-hosted instance is required.

How It Works

1. Scheduler Trigger – Defines how frequently new videos should be created (e.g., every 12 hours).
2. Random Template Selector – A JavaScript node generates a random number to choose from multiple creative prompt templates (see the sketch at the end of this section).
3. AI Agent (Google Gemini) – Gemini generates a JSON object containing a short title, a human-readable video description, and a detailed text-to-video prompt. The Structured Output Parser ensures strict JSON shape.
4. Video Generation with KIE AI – The prompt is sent to KIE AI's video generation API. KIE AI creates a synthetic AI video based on the description and your chosen parameters (aspect ratio, frames, watermark removal, etc.).
5. Polling & Retrieval – The workflow waits until the video is fully rendered, then fetches the final video URL.
6. Media Upload to Blotato – The generated video is uploaded into Blotato's media storage for publishing.
7. Automatic Posting to Social Platforms – Blotato distributes the video to all connected platforms, for example YouTube, Instagram, Facebook, LinkedIn, Bluesky, TikTok, X, or any other platform supported by your Blotato account.

This results in a fully automated "idea → video → upload → publish" pipeline.

How to Use

- Start by testing the workflow manually to verify video generation and posting.
- Adjust the Scheduler Trigger to fit your posting frequency.
- Add your API credentials for Google Gemini, KIE AI, and Blotato.
- Ensure your Blotato account has social channels connected.
- Edit or expand the prompt templates for your content niche: comedy clips, educational videos, product demos, storytelling, pet videos, motivational content. The more template prompts you add, the more diverse your automated videos will be.

Requirements

- **Google Gemini API Key** – Used for generating structured titles, descriptions, and video prompts.
- **KIE AI API key** – Required for creating the actual AI-generated video.
- **Blotato account** – Required for uploading media and automatically posting to platforms.
- **Self-hosted n8n instance** – Needed because Blotato uses a community node, which n8n Cloud does not support.

Limitations

- KIE AI models may output inconsistent results if prompts are vague.
- High-frequency scheduling may consume API credits quickly.
- Some platforms (e.g., TikTok or Facebook Pages) may require additional permissions or account linking steps in Blotato.
- Video rendering time varies depending on prompt complexity.

Customization Ideas

- Add more prompt templates to increase variety.
- Swap Gemini for an LLM of your choice (OpenAI, Claude, etc.).
- Add a Telegram, Discord, or Slack notification once posting is complete.
- Store all generated titles, descriptions, and video URLs in Google Sheets, Notion, Airtable, or Supabase.
- Add multi-language support using a translation node.
- Add an approval step where videos go to your team before publishing.
- Add analytics logging (impressions, views, etc.) using Blotato or another service.

Troubleshooting

- **Video not generating?** Check if your KIE AI model accepts your chosen parameters.
- **Model not found?** Switch to a supported Gemini model for your region.
- **Publishing fails?** Ensure Blotato platform accounts are authenticated.
- **Workflow stops early?** Increase the wait timeout before polling KIE AI.

This template is designed for easy setup and high flexibility. All technical details, configuration steps, and workflow logic are already included in sticky notes inside the workflow. Once configured, this pipeline becomes a hands-free AI-powered content engine capable of generating and publishing content at scale.
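For reference, the Random Template Selector step mentioned in "How It Works" could be implemented in an n8n Code node roughly like this. It is a sketch only; the template strings and the output field name (`selectedTemplate`) are placeholders, not the template's actual values.

```javascript
// Sketch of a random prompt-template selector (n8n Code node).
// The template strings below are placeholders; replace them with your own niches.
const templates = [
  'A 15-second comedic skit about everyday office life, vertical 9:16',
  'A calm educational explainer about a surprising science fact, vertical 9:16',
  'A cinematic pet video with an uplifting twist ending, vertical 9:16',
];

// Pick one template at random on every run.
const index = Math.floor(Math.random() * templates.length);

return [{ json: { selectedTemplate: templates[index], templateIndex: index } }];
```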
by Kevin Meneses
What this workflow does

This workflow automatically monitors eBay Deals and sends Telegram alerts when relevant, high-quality deals are detected. It combines:

- **Web scraping with Decodo**
- **JavaScript pre-processing (no raw HTML sent to the LLM)**
- **AI-based product classification and deal scoring**
- **Rule-based filtering using price and score**

Only valuable deals reach the final notification.

How it works (overview)

1. The workflow runs manually or on a schedule.
2. The eBay Deals page is scraped using Decodo, which handles proxies and anti-bot protections (Decodo – Web Scraper for n8n).
3. JavaScript extracts only key product data (ID, title, price, URL, image) — see the sketch below.
4. An AI Agent classifies each product and assigns a deal quality score (0–10).
5. Price and score rules are applied.
6. Matching deals are sent to Telegram.

How to configure it

1. Decodo: Add your Decodo API credentials to the Decodo node. Optionally change the target eBay URL.
2. AI Agent: Add your LLM credentials (e.g. Google Gemini). No HTML is sent to the model — only compact, structured data.
3. Telegram: Add your Telegram Bot Token. Set your chat_id in the Telegram node. Customize the alert message if needed.
4. Filtering rules: Adjust price limits and minimum deal score in the IF node.
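The JavaScript pre-processing step could look roughly like the following sketch. The field names used here are assumptions about the scraper's response shape, not the template's actual code; adapt them to what the Decodo node actually returns.

```javascript
// Sketch of the JavaScript pre-processing step (n8n Code node).
// Assumption: the previous node already yields one item per scraped product
// with raw fields; the field names used here are illustrative only.
const compact = $input.all().map(item => {
  const raw = item.json;
  return {
    json: {
      id: raw.itemId ?? null,                         // assumed field name
      title: (raw.title ?? '').trim(),
      price: parseFloat(String(raw.price ?? '').replace(/[^0-9.]/g, '')) || null,
      url: raw.url ?? null,
      image: raw.image ?? null,
    },
  };
});

// Only this compact, structured data is passed on to the AI Agent.
return compact;
```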
by vinci-king-01
Product Price Monitor with Mailchimp and Baserow

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow scrapes multiple e-commerce sites for product pricing data, stores the historical prices in Baserow, analyzes weekly trends, and emails a neatly formatted seasonal report to your Mailchimp audience. It is designed for retailers who need to stay on top of seasonal pricing patterns to make informed inventory and pricing decisions.

Pre-conditions/Requirements

Prerequisites

- Running n8n instance (self-hosted or n8n cloud)
- ScrapeGraphAI community node installed
- Mailchimp account with at least one audience list
- Baserow workspace with edit rights
- Product URLs or SKU list from target e-commerce platforms

Required Credentials

| Credential | Used By | Scope |
|------------|---------|-------|
| ScrapeGraphAI API Key | ScrapeGraphAI node | Web scraping |
| Mailchimp API Key & Server Prefix | Mailchimp node | Sending emails |
| Baserow API Token | Baserow node | Reading & writing records |

Baserow Table Setup

Create a table named price_tracker with the following fields:

| Field Name | Type | Example |
|------------|------|---------|
| product_name | Text | "Winter Jacket" |
| product_url | URL | https://example.com/winter-jacket |
| current_price | Number | 59.99 |
| scrape_date | DateTime | 2023-11-15T08:21:00Z |

How it works

Key Steps:

- **Schedule Trigger**: Fires every week (or custom CRON) to start the monitoring cycle.
- **Code (Prepare URLs)**: Loads or constructs the list of product URLs to monitor.
- **SplitInBatches**: Processes product URLs in manageable batches to avoid rate-limit issues.
- **ScrapeGraphAI**: Scrapes each product page and extracts the current price and name.
- **If (Price Found?)**: Continues only if scraping returns a valid price.
- **Baserow**: Upserts the scraped data into the price_tracker table.
- **Code (Trend Analysis)**: Aggregates weekly data to detect price increases, decreases, or stable trends.
- **Set (Mail Content)**: Formats the trend summary into an HTML email body.
- **Mailchimp**: Sends the seasonal price-trend report to the selected audience segment.
- **Sticky Note**: Documentation node explaining business logic in-workflow.

Set up steps

Setup Time: 10-15 minutes

1. Clone the template: Import the workflow JSON into your n8n instance.
2. Install ScrapeGraphAI: n8n-nodes-scrapegraphai via the Community Nodes panel.
3. Add credentials: a. ScrapeGraphAI API Key, b. Mailchimp API Key & Server Prefix, c. Baserow API Token.
4. Configure Baserow node: Point it to your price_tracker table.
5. Edit product list: In the "Prepare URLs" Code node, replace the sample URLs with your own (a sketch is shown at the end of this section).
6. Adjust schedule: Modify the Schedule Trigger CRON expression if weekly isn't suitable.
7. Test run: Execute the workflow manually once to verify credentials and data flow.
8. Activate: Turn on the workflow for automatic weekly monitoring.

Node Descriptions

Core Workflow Nodes:

- **Schedule Trigger** – Initiates the workflow on a weekly CRON schedule.
- **Code (Prepare URLs)** – Generates an array of product URLs/SKUs to scrape.
- **SplitInBatches** – Splits the array into chunks of 5 URLs to stay within request limits.
- **ScrapeGraphAI** – Scrapes each URL, using XPath/CSS selectors to pull price & title.
- **If (Price Found?)** – Filters out failed or empty scrape results.
- **Baserow** – Inserts or updates the price record in the database.
- **Code (Trend Analysis)** – Calculates week-over-week price changes and flags anomalies.
- **Set (Mail Content)** – Creates an HTML table with product, current price, and trend arrow.
- **Mailchimp** – Sends or schedules the email campaign.
- **Sticky Note** – Provides inline documentation and edit hints.

Data Flow:

Schedule Trigger → Code (Prepare URLs) → SplitInBatches
SplitInBatches → ScrapeGraphAI → If (Price Found?) → Baserow
Baserow → Code (Trend Analysis) → Set (Mail Content) → Mailchimp

Customization Examples

Change scraping frequency:

```
// Schedule Trigger CRON for daily at 07:00 UTC
0 7 * * *
```

Add competitor comparison column:

```
// Code (Trend Analysis)
item.competitor_price_diff = item.current_price - item.competitor_price;
return item;
```

Data Output Format

The workflow outputs structured JSON data:

```
{
  "product_name": "Winter Jacket",
  "product_url": "https://example.com/winter-jacket",
  "current_price": 59.99,
  "scrape_date": "2023-11-15T08:21:00Z",
  "weekly_trend": "decrease"
}
```

Troubleshooting

Common Issues

- Invalid ScrapeGraphAI key – Verify the API key and ensure your subscription is active.
- Mailchimp "Invalid Audience" error – Double-check the audience ID and that the API key has correct permissions.
- Baserow "Field mismatch" – Confirm your table fields match the names/types in the workflow.

Performance Tips

- Limit each SplitInBatches run to ≤10 URLs to reduce scraping timeouts.
- Enable caching in ScrapeGraphAI to avoid repeated requests to the same URL within short intervals.

Pro Tips:

- Use environment variables for all API keys to avoid hard-coding secrets.
- Add an extra If node to alert you if a product's price drops below a target threshold.
- Combine with n8n's Slack node for real-time alerts in addition to Mailchimp summaries.
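For the product list referenced in setup step 5, the "Code (Prepare URLs)" node might look like the following sketch. The sample URLs and field names are placeholders to replace with your own products.

```javascript
// Sketch of the "Code (Prepare URLs)" node: one output item per product URL.
// Replace these sample entries with your own products.
const products = [
  { product_name: 'Winter Jacket', product_url: 'https://example.com/winter-jacket' },
  { product_name: 'Hiking Boots',  product_url: 'https://example.com/hiking-boots' },
];

return products.map(p => ({ json: p }));
```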
by vinci-king-01
Medical Research Tracker with Matrix and Pipedrive

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically monitors selected government and healthcare-policy websites, extracts newly published or updated policy documents, logs them as deals in a Pipedrive pipeline, and announces critical changes in a Matrix room. It gives healthcare administrators and policy analysts a near real-time view of policy developments without manual web checks.

Pre-conditions/Requirements

Prerequisites

- n8n instance (self-hosted or n8n cloud)
- ScrapeGraphAI community node installed
- Active Pipedrive account with at least one pipeline
- Matrix account & accessible room for notifications
- Basic knowledge of n8n credential setup

Required Credentials

- **ScrapeGraphAI API Key** – Enables the scraping engine
- **Pipedrive OAuth2 / API Token** – Creates & updates deals
- **Matrix Credentials** – Homeserver URL, user, access token (or password)

Specific Setup Requirements

| Variable | Description | Example |
|----------|-------------|---------|
| POLICY_SITES | Comma-separated list of URLs to scrape | https://health.gov/policies,https://who.int/proposals |
| PD_PIPELINE_ID | Pipedrive pipeline where deals are created | 5 |
| PD_STAGE_ID_ALERT | Stage ID for "Review Needed" | 17 |
| MATRIX_ROOM_ID | Room to send alerts (incl. leading !) | !policy:matrix.org |

Edit the initial Set node to provide these values before running.

How it works

Key Steps:

- **Scheduled Trigger**: Runs every 6 hours (configurable) to start the monitoring cycle.
- **Code (URL List Builder)**: Generates an array from POLICY_SITES for downstream batching.
- **SplitInBatches**: Iterates through each policy URL individually.
- **ScrapeGraphAI**: Scrapes page titles, publication dates, and summary paragraphs.
- **If (New vs Existing)**: Compares the scraped hash with the last run; continues only for fresh content.
- **Merge (Aggregate Results)**: Collects all "new" policies into a single payload.
- **Set (Deal Formatter)**: Maps scraped data to Pipedrive deal fields.
- **Pipedrive Node**: Creates or updates a deal per policy item.
- **Matrix Node**: Posts a formatted alert message in the specified Matrix room.

Set up steps

Setup Time: 15-20 minutes

1. Install Community Node – In n8n, go to Settings → Community Nodes → Install and search for ScrapeGraphAI.
2. Add Credentials – Create new credentials for ScrapeGraphAI, Pipedrive, and Matrix under Credentials.
3. Configure Environment Variables – Open the Set (Initial Config) node and replace placeholders (POLICY_SITES, PD_PIPELINE_ID, etc.) with your values.
4. Review Schedule – Double-click the Schedule Trigger node to adjust the interval if needed.
5. Activate Workflow – Click Activate. The workflow will run at the next scheduled interval.
6. Verify Outputs – Check Pipedrive for new deals and the Matrix room for alert messages after the first run.

Node Descriptions

Core Workflow Nodes:

- **stickyNote** – Provides an at-a-glance description of the workflow logic directly on the canvas.
- **scheduleTrigger** – Fires the workflow periodically (default 6 hours).
- **code (URL List Builder)** – Splits the POLICY_SITES variable into an array.
- **splitInBatches** – Ensures each URL is processed individually to avoid timeouts.
- **scrapegraphAi** – Parses HTML and extracts policy metadata using XPath/CSS selectors.
- **if (New vs Existing)** – Uses hashing to ignore unchanged pages (see the hashing sketch at the end of this section).
- **merge** – Combines all new items so they can be processed in bulk.
- **set (Deal Formatter)** – Maps scraped fields to Pipedrive deal properties.
- **matrix** – Sends formatted messages to a Matrix room for team visibility.
- **pipedrive** – Creates or updates deals representing each policy update.

Data Flow:

scheduleTrigger → code → splitInBatches → scrapegraphAi → if → merge → set → pipedrive → matrix

Customization Examples

1. Add another data field (e.g., policy author):

```
// Inside ScrapeGraphAI node → Selectors
{
  "title": "//h1/text()",
  "date": "//time/@datetime",
  "summary": "//p[1]/text()",
  "author": "//span[@class='author']/text()" // new line
}
```

2. Switch notifications from Matrix to Email:

```
// Replace Matrix node with "Send Email"
{
  "to": "policy-team@example.com",
  "subject": "New Healthcare Policy Detected: {{$json.title}}",
  "text": "Summary:\n{{$json.summary}}\n\nRead more at {{$json.url}}"
}
```

Data Output Format

The workflow outputs structured JSON data for each new policy article:

```
{
  "title": "Affordable Care Expansion Act – 2024",
  "url": "https://health.gov/policies/acea-2024",
  "date": "2024-06-14T09:00:00Z",
  "summary": "Proposes expansion of coverage to rural areas...",
  "source": "health.gov",
  "hash": "2d6f1c8e3b..."
}
```

Troubleshooting

Common Issues

- ScrapeGraphAI returns empty objects – Verify selectors match the current HTML structure; inspect the site with developer tools and update the node configuration.
- Duplicate deals appear in Pipedrive – Ensure the "Find or Create" option is enabled in the Pipedrive node, using the page hash or url as a unique key.

Performance Tips

- Limit POLICY_SITES to under 50 URLs per run to avoid hitting rate limits.
- Increase the Schedule Trigger interval if you notice ScrapeGraphAI rate-limiting.

Pro Tips:

- Store historical scraped data in a database node for long-term audit trails.
- Use the n8n Workflow Executions page to replay failed runs without waiting for the next schedule.
- Add an Error Trigger node to emit alerts if scraping or API calls fail.
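As a rough illustration of the hash-based change detection described above, a Code node could compute a content hash like this. It is only a sketch under assumptions: the input fields (`title`, `summary`, `date`) and the comparison source (`previousHash`) are placeholders for whatever the workflow actually stores, and `require('crypto')` is available only when the n8n instance allows built-in modules in Code nodes.

```javascript
// Sketch: compute a hash of the scraped content and flag new/changed pages.
const crypto = require('crypto'); // requires built-in modules to be enabled for Code nodes

return $input.all().map(item => {
  const { title = '', summary = '', date = '' } = item.json;

  // Hash the fields that matter for change detection.
  const hash = crypto
    .createHash('sha256')
    .update(`${title}|${summary}|${date}`)
    .digest('hex');

  // `previousHash` is assumed to come from an earlier lookup of the last run.
  const isNew = hash !== item.json.previousHash;

  return { json: { ...item.json, hash, isNew } };
});
```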
by Colton Randolph
This n8n workflow automatically scrapes TechCrunch articles, filters for AI-related content using OpenAI, and delivers curated summaries to your Slack channels. Perfect for individuals or teams who need to stay current on artificial intelligence developments without manually browsing tech news sites.

Who's it for

- AI product teams tracking industry developments and competitive moves
- Tech investors monitoring AI startup coverage and funding announcements
- Marketing teams following AI trends for content and positioning strategies
- Executives needing daily AI industry briefings without manual research overhead
- Development teams staying current on AI tools, frameworks, and breakthrough technologies

How it works

The workflow runs on a daily schedule, crawling a specified number of TechCrunch articles from the current year. Firecrawl extracts clean markdown content while bypassing anti-bot measures and handling JavaScript rendering automatically. Each article gets analyzed by an AI research assistant that determines if the content relates to artificial intelligence, machine learning, AI companies, or AI technology. Articles marked as "NOT_AI_RELATED" get filtered out automatically (see the filtering sketch below). For AI-relevant articles, OpenAI generates focused 3-bullet-point summaries that capture key insights. These summaries get delivered to your specified Slack channel with the original TechCrunch article title and source link for deeper reading.

How to set up

1. Configure Firecrawl: Add your Firecrawl API key to the HTTP Request node.
2. Set OpenAI credentials: Add your OpenAI API key to the AI Agent node.
3. Connect Slack: Configure your Slack webhook URL and target channel.
4. Adjust scheduling: Set your preferred trigger frequency (daily recommended).
5. Test the workflow: Run manually to verify article extraction and Slack delivery.

Requirements

- **Firecrawl account** with API access for TechCrunch web scraping
- **OpenAI API key** for AI content analysis and summarization
- **Slack workspace** with webhook permissions for message delivery
- **n8n instance** (cloud or self-hosted) for workflow execution

How to customize the workflow

- Source expansion: Modify the HTTP node URL to target additional tech publications beyond TechCrunch, or adjust the article limit and date filtering for different coverage needs.
- AI focus refinement: Update the OpenAI prompt to focus on specific AI verticals like generative AI, robotics, or ML infrastructure. Add company names or technology terms to the relevance filtering logic.
- Summary formats: Change from 3-bullet summaries to executive briefs, technical analyses, or competitive intelligence reports by modifying the OpenAI summarization prompt.
- Multi-channel delivery: Extend beyond Slack to email notifications, Microsoft Teams, or database storage for historical trend analysis and executive dashboards.
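For illustration, the "NOT_AI_RELATED" filtering could be done in a small Code node placed between the relevance check and the summarization step. This is a sketch only; the field holding the model's verdict (`relevance`) is an assumed name, not necessarily what the template uses.

```javascript
// Sketch: drop articles the AI marked as not AI-related before summarization.
// Assumes each item carries the model's verdict in a `relevance` field.
const aiRelated = $input.all().filter(item => {
  const verdict = (item.json.relevance || '').toUpperCase();
  return !verdict.includes('NOT_AI_RELATED');
});

return aiRelated;
```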
by Yang
🛍️ Pick Best-Value Products from Any Website Using Dumpling AI, GPT-4o, and Google Sheets

Who's it for

This workflow is for eCommerce researchers, affiliate marketers, and anyone who needs to compare product listings across sites like Amazon. It's perfect for quickly identifying top product picks based on delivery speed, free shipping, and price.

What it does

Just submit a product listing URL. The workflow will crawl it using Dumpling AI, take screenshots of the pages, and pass them to GPT-4o to extract up to 3 best-value picks. It analyzes screenshots visually—no HTML scraping needed. Each result includes: product name, price, review count, and free delivery date (if available).

How it works

- 📝 Receives a URL through a web form
- 🧠 Uses Dumpling AI to crawl the website
- 📸 Takes screenshots of each product listing
- 🔍 GPT-4o analyzes each image to pick top products
- 🔧 A code node parses and flattens the output (a sketch of this step is shown below)
- 📊 Google Sheets stores the result
- 📧 Sends the spreadsheet link via email

Requirements

- **Dumpling AI token**
- **OpenAI key** (GPT-4o)
- **Google Sheet** with columns: product name, price, reviews no., free_delivery_date

> You can customize the AI prompt to extract other visual insights (e.g., ratings, specs).
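A minimal sketch of the parse-and-flatten Code node mentioned above might look like this. It assumes GPT-4o returns a JSON string with a `picks` array stored in an `output` field; both the input shape and the output column names are assumptions to adapt to the actual workflow.

```javascript
// Sketch: parse the GPT-4o output and emit one flat row per product pick.
// Assumes the model's reply is a JSON string stored in `item.json.output`
// shaped like { "picks": [{ "product_name": ..., "price": ..., ... }] }.
const rows = [];

for (const item of $input.all()) {
  let parsed;
  try {
    parsed = JSON.parse(item.json.output); // field name assumed
  } catch (e) {
    continue; // skip items the model did not return as valid JSON
  }

  for (const pick of parsed.picks || []) {
    rows.push({
      json: {
        'product name': pick.product_name ?? '',
        price: pick.price ?? '',
        'reviews no.': pick.review_count ?? '',
        free_delivery_date: pick.free_delivery_date ?? '',
      },
    });
  }
}

return rows;
```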
by Rohit Dabra
Jira MCP Server Integration with n8n

Overview

Transform your Jira project management with the power of AI and automation! This n8n workflow template demonstrates how to create a seamless integration between chat interfaces, AI processing, and Jira Software using MCP (Model Context Protocol) server architecture.

What This Workflow Does

- **Chat-Driven Automation**: Trigger Jira operations through simple chat messages
- **AI-Powered Issue Creation**: Automatically generate detailed Jira issues with descriptions and acceptance criteria
- **Complete Jira Management**: Get issue status, changelogs, comments, and perform full CRUD operations
- **Memory Integration**: Maintain context across conversations for smarter automations
- **Zero Manual Entry**: Eliminate repetitive data entry and human errors

Key Features

✅ Natural Language Processing: Use Google Gemini to understand and process chat requests
✅ MCP Server Integration: Secure, efficient communication with Jira APIs
✅ Comprehensive Jira Operations: Create, read, update, delete issues and comments
✅ Smart Memory: Context-aware conversations for better automation
✅ Multi-Action Workflow: Handle multiple Jira operations from a single trigger

Demo Video

🎥 Watch the Complete Demo: Automate Jira Issue Creation with n8n & AI | MCP Server Integration

Prerequisites

Before setting up this workflow, ensure you have:

- **n8n instance** (cloud or self-hosted)
- **Jira Software** account with appropriate permissions
- **Google Gemini API** credentials
- **MCP Server** configured and accessible
- Basic understanding of n8n workflows

Setup Guide

Step 1: Import the Workflow
1. Copy the workflow JSON from this template.
2. In your n8n instance, click Import > From Text.
3. Paste the JSON and click Import.

Step 2: Configure Google Gemini
1. Open the Google Gemini Chat Model node.
2. Add your Google Gemini API credentials.
3. Configure the model parameters: Model: gemini-pro (recommended); Temperature: 0.7 for balanced creativity; Max tokens: as per your requirements.

Step 3: Set Up MCP Server Connection
1. Configure the MCP Client node: Server URL (your MCP server endpoint), Authentication (add required credentials), and Timeout (set appropriate timeout values).
2. Ensure your MCP server supports Jira operations: issue creation and retrieval, comment management, status updates, and changelog access.

Step 4: Configure Jira Integration
1. Set up Jira credentials in n8n: Go to Credentials > Add Credential, select Jira Software API, and add your Jira instance URL, email, and API token.
2. Configure each Jira node: Get Issue Status (set project key and filters), Create Issue (define issue type and required fields), Manage Comments (set permissions and content rules).

Step 5: Memory Configuration
Configure the Simple Memory node: set the memory key for conversation context, define the memory retention duration, and configure the memory scope (user/session level).

Step 6: Chat Trigger Setup
Configure the When Chat Message Received trigger: set up the webhook URL or chat platform integration, define message filters if needed, and test the trigger with sample messages.

Usage Examples

Creating a Jira Issue

Chat Input: Can you create an issue in Jira for Login Page with detailed description and acceptance criteria?

Expected Output:
- New Jira issue created with structured description
- Automatically generated acceptance criteria
- Proper labeling and categorization

Getting Issue Status

Chat Input: What's the status of issue PROJ-123?
Expected Output:
- Current issue status
- Last updated information
- Assigned user details

Managing Comments

Chat Input: Add a comment to issue PROJ-123: "Ready for testing in staging environment"

Expected Output:
- Comment added to the specified issue
- Notification sent to relevant team members

Customization Options

Extending Jira Operations
- Add more Jira operations (transitions, watchers, attachments)
- Implement custom field handling
- Create multi-project workflows

AI Enhancement
- Fine-tune Gemini prompts for better issue descriptions
- Add custom validation rules
- Implement approval workflows

Integration Expansion
- Connect to Slack, Discord, or Teams
- Add email notifications
- Integrate with time tracking tools

Troubleshooting

Common Issues

- MCP Server Connection Failed: Verify server URL and credentials, check network connectivity, and ensure the MCP server is running and accessible.
- Jira API Errors: Validate Jira credentials and permissions, check project access rights, and verify issue type and field configurations.
- AI Response Issues: Review Gemini API quotas and limits, adjust prompt engineering for better results, and check model parameters and settings.

Performance Tips

- Optimize memory usage for long conversations
- Implement rate limiting for API calls
- Use error handling and retry mechanisms
- Monitor workflow execution times

Best Practices

- Security: Store all credentials securely using n8n's credential system
- Testing: Test each node individually before running the complete workflow
- Monitoring: Set up alerts for workflow failures and API limits
- Documentation: Keep track of custom configurations and modifications
- Backup: Regular backup of workflow configurations and credentials

Happy Automating! 🚀

This workflow template is designed to boost productivity and eliminate manual Jira management tasks. Customize it according to your team's specific needs and processes.
by Robert Breen
This n8n workflow template automatically processes phone interview transcripts using AI to evaluate candidates against specific criteria and saves the results to Google Sheets. Perfect for HR departments, recruitment agencies, or any business conducting phone screenings.

What This Workflow Does

This automated workflow:
- Receives phone interview transcripts via webhook
- Uses OpenAI GPT models to analyze candidate responses against predefined qualification criteria
- Extracts key information (name, phone, location, qualification status)
- Automatically saves structured results to a Google Sheet for easy review and follow-up

The workflow is specifically designed for driving job interviews but can be easily adapted for any position with custom evaluation criteria.

Tools & Services Used

- **N8N** – Workflow automation platform
- **OpenAI API** – AI-powered transcript analysis (GPT-4o-mini)
- **Google Sheets** – Data storage and management
- **Webhook** – Receiving transcript data

Prerequisites

Before implementing this workflow, you'll need:
1. N8N Instance – Self-hosted or cloud version
2. OpenAI API Account – For AI transcript processing
3. Google Account – For Google Sheets integration
4. Phone Interview System – That can send webhooks (like Vapi.ai)

Step-by-Step Setup Instructions

Step 1: Set Up OpenAI API Access
1. Visit OpenAI's API platform
2. Create an account or log in
3. Navigate to the API Keys section
4. Generate a new API key
5. Copy and securely store your API key

Step 2: Create Your Google Sheet

Option 1: Use Our Pre-Made Template (Recommended)
1. Copy our template: Driver Interview Results Template
2. Click "File" → "Make a copy" to create your own version
3. Rename it as desired
4. Copy your new sheet's URL – you'll need this for the workflow

Option 2: Create From Scratch
1. Go to Google Sheets
2. Create a new spreadsheet
3. Name it "Driver Interview Results" (or your preferred name)
4. Set up the following column headers in row 1: A1: name, B1: phone, C1: cityState, D1: qualifies, E1: reasoning
5. Copy the Google Sheet URL – you'll need this for the workflow

Step 3: Import and Configure the N8N Workflow

Import the Workflow
1. Copy the workflow JSON from the template
2. In your N8N instance, go to Workflows → Import from JSON
3. Paste the JSON and import

Configure OpenAI Credentials
1. Click on either "OpenAI Chat Model" node
2. Set up credentials using your OpenAI API key
3. Test the connection to ensure it works

Configure Google Sheets Integration
1. Click on the "Save to Google Sheets" node
2. Set up Google Sheets OAuth2 credentials
3. Select your spreadsheet from the dropdown
4. Choose the correct sheet (usually "Sheet1")

Update the Webhook
1. Click on the "Webhook" node
2. Note the webhook URL that n8n generates
3. This URL will receive your transcript data

Step 4: Customize Evaluation Criteria

The workflow includes predefined criteria for a Massachusetts driving job. To customize for your needs:
1. Click on the "Evaluate Candidate" node
2. Modify the system message to include your specific requirements
3. Update the evaluation criteria checklist
4. Adjust the JSON output format if needed

Current Evaluation Criteria:
- Valid Massachusetts driver's license
- No felony convictions
- Clean driving record (no recent tickets/accidents)
- Willing to complete background check
- Can pass drug test (including marijuana)
- Available full-time Monday-Friday
- Lives in Massachusetts

Step 5: Connect to Vapi.ai (Phone Interview System)

This workflow is specifically designed to work with Vapi.ai's phone interview system.
Here's how to connect it: Setting Up the Vapi Integration Copy Your N8N Webhook URL In your n8n workflow, click on the "Webhook" node Copy the webhook URL (it should look like: https://your-n8n-instance.com/webhook-test/351ffe7c-69f2-4657-b593-c848d59205c0) Configure Your Vapi Assistant Log into your Vapi.ai dashboard Create or edit your phone interview assistant In the assistant settings, find the "Server" section Set the Server URL to your n8n webhook URL Set timeout to 20 seconds (as configured in the workflow) Configure Server Messages In your Vapi assistant settings, enable these server messages: end-of-call-report transcript[transcriptType="final"] Set Up the Interview Script Use the provided interview script in your Vapi assistant (found in the workflow's system message) This ensures consistent data collection for the AI evaluation Expected Data Format from Vapi The workflow expects Vapi to send data in this specific format: { "body": { "message": { "artifact": { "transcript": "AI: Hi. Are you interested in driving for Bank of Transport?\nUser: Yes.\nAI: Great. Before we go further..." } } } } Vapi Configuration Checklist ✅ Webhook URL set in Vapi assistant server settings ✅ Server messages enabled: end-of-call-report, transcript[transcriptType="final"] ✅ Interview script configured in assistant ✅ Assistant set to send webhooks on call completion Alternative Phone Systems If you're not using Vapi.ai, you can adapt this workflow for other phone systems by: Modifying the "Edit Fields2" node to extract transcripts from your system's data format Updating the webhook data structure expectations Ensuring your phone system sends the complete interview transcript Step 6: Test the Workflow Test with Sample Data Use the "Execute Workflow" button with test data Verify that data appears correctly in your Google Sheet Check that the AI evaluation logic works as expected End-to-End Testing Send a test webhook with a real transcript Monitor each step of the workflow Confirm the final result is saved to Google Sheets Workflow Node Breakdown Webhook - Receives transcript data from your phone system Edit Fields2 - Extracts the transcript from the incoming data Evaluate Candidate - AI analysis using GPT-4o-mini to assess qualification Convert to JSON - Ensures proper JSON formatting with structured output parser Save to Google Sheets - Automatically logs results to your spreadsheet Customization Options Modify Evaluation Criteria Edit the system prompt in the "Evaluate Candidate" node Add or remove qualification requirements Adjust the scoring logic Change Output Format Modify the JSON schema in the "Structured Output Parser" node Update Google Sheets column mapping accordingly Add Additional Processing Insert nodes for email notifications Add Slack/Discord alerts for qualified candidates Integrate with your CRM or ATS system Troubleshooting Common Issues: OpenAI API Errors**: Check API key validity and billing status Google Sheets Not Updating**: Verify OAuth permissions and sheet access Webhook Not Receiving Data**: Confirm URL and POST format from your phone system AI Evaluation Inconsistencies**: Refine the system prompt with more specific criteria Usage Tips Monitor Token Usage**: OpenAI charges per token, so monitor your usage Regular Review**: Periodically review AI evaluations for accuracy Backup Data**: Export Google Sheets data regularly for backup Privacy Compliance**: Ensure transcript handling complies with local privacy laws Need Help with Implementation? 
For professional setup, customization, or troubleshooting of this workflow, contact:

Robert - Ynteractive Solutions
- **Email**: rbreen@ynteractive.com
- **Website**: www.ynteractive.com
- **LinkedIn**: linkedin.com/in/robert-interactive

Specializing in AI-powered workflow automation, business process optimization, and custom integration solutions.
by Trung Tran
Automated AWS IAM Compliance Workflow for MFA Enforcement and Access Key Deactivation

> This workflow leverages AWS IAM APIs and n8n automation to ensure strict security compliance by continuously monitoring IAM users for MFA (Multi-Factor Authentication) enforcement.

Who's it for

This workflow is designed for DevOps, Security, or Cloud Engineers responsible for maintaining IAM security compliance in AWS accounts. It's ideal for teams who want to enforce MFA usage and automatically disable access for non-compliant IAM users.

How it works / What it does

This automated workflow performs a daily check to detect IAM users without an MFA device and deactivate their access keys.

Step-by-step:
1. Daily scheduler: Triggers the workflow once a day.
2. Get many users: Retrieves a list of all IAM users in the account.
3. Get IAM User MFA Devices: Calls the AWS API to get MFA device info for each user.
4. Filter out IAM users with MFA: Keeps only users without any MFA device.
5. Send warning message(s): Sends Slack alerts for users who do not have MFA enabled.
6. Get User Access Key(s): Fetches access keys for each non-MFA user.
7. Parse the list of user access key(s): Extracts and flattens key information like AccessKeyId, Status, and UserName (see the sketch below).
8. Filter out inactive keys: Keeps only active access keys for further action.
9. Deactivate Access Key(s): Calls the AWS API to deactivate each active key for non-MFA users.

How to set up

1. Configure AWS credentials in your environment (IAM role or AWS access key with required permissions).
2. Connect Slack via the Slack node for alerting (set channel and credentials).
3. Set the scheduler to your preferred frequency (e.g., daily at 9AM).
4. Adjust any Slack message template or filtering conditions as needed.

Requirements

- IAM user or role credentials with the following AWS IAM permissions: iam:ListUsers, iam:ListMFADevices, iam:ListAccessKeys, iam:UpdateAccessKey
- Slack credentials (Bot token with chat:write permission)
- n8n environment with Slack integration and AWS credentials (set via environment or credentials manager)

How to customize the workflow

- **Alert threshold**: Instead of immediate deactivation, you can delay action (e.g., alert first, wait 24h, then disable).
- **Change notification channel**: Modify the Slack node to send alerts to a different channel or add email integration.
- **Whitelist exceptions**: Add a Set or IF node to exclude specific usernames (e.g., service accounts).
- **Add audit logging**: Use Google Sheets, Airtable, or a database to log which users were flagged or had access disabled.
- **Extend access checks**: Include a console password check (GetLoginProfile) if needed.
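A rough sketch of the parse-and-flatten step (step 7 above) in an n8n Code node is shown here. It assumes each incoming item holds an IAM ListAccessKeys response under the standard `AccessKeyMetadata` field; verify this against what the preceding node actually outputs.

```javascript
// Sketch: flatten ListAccessKeys responses into one item per access key.
// Assumes each incoming item holds an IAM ListAccessKeys response under
// `json.AccessKeyMetadata` (the standard AWS field); adjust if the previous
// node wraps the response differently.
const keys = [];

for (const item of $input.all()) {
  const metadata = item.json.AccessKeyMetadata || [];
  for (const key of metadata) {
    keys.push({
      json: {
        UserName: key.UserName,
        AccessKeyId: key.AccessKeyId,
        Status: key.Status, // "Active" or "Inactive"
      },
    });
  }
}

return keys;
```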
by Samir Saci
Tags: ESL, English Learning, Podcasts, RSS, AI Exercises, ElevenLabs

Context

Hi! I'm Samir, Supply Chain Engineer and Data Scientist based in Paris, and founder of the startup LogiGreen. I created this workflow for my mother, who is currently learning English, to turn the BBC 6 Minute English podcast into ready-to-use English lessons. Each lesson includes vocabulary, exercises, and discussion questions, along with links to access the podcast content (audio and transcript).

> Use this assistant to automatically share English lessons from a renowned podcast.

📬 For business inquiries, you can find me on LinkedIn

Who is this template for?

This template is designed for:
- **ESL teachers** who want a fresh, structured lesson every week from real-life audio
- **Independent learners** who want a guided way to study English with podcasts
- **Language schools or content creators** who send regular English lessons by email

What does this workflow do?

This workflow acts as an AI-powered English lesson generator from podcast episodes. It:
- Runs every Sunday at 20:00 using a Schedule Trigger and reads the BBC 6 Minute English RSS feed
- Checks a Data Table of archived episodes and filters out those already sent, using their guid (see the sketch below)
- Keeps the latest unsent episode and loads its web page content via HTTP
- Parses the HTML in a Code node to extract the episode description, full transcript, and BBC vocabulary list
- Calls three AI nodes (OpenAI) to generate: a motivational email hook message, fill-in-the-blank vocabulary exercises, and discussion questions related to the topic
- Combines all vocabulary words and sends them to ElevenLabs to generate a slow-paced audio track for listening practice
- Builds a prettified HTML email that includes: title, description, hook, vocabulary list, exercises, discussion questions, and resource links
- Sends the final lesson by email via the Gmail node, with the vocabulary audio attached

P.S.: You can customise the footer to your school or company identity.

🎥 Tutorial

I advise you to check the tutorial on my YouTube channel for the details on how to set up the nodes and customise the content.

Next Steps

Follow the sticky notes to set up all the nodes:
- Replace the Data Table reference with your own (storing at least guid, title, link, processed_date)
- Set up your OpenAI credentials in the three Model nodes
- Set up your ElevenLabs credentials and choose a voice in the audio node
- Configure your Gmail credentials and recipient email address in the Send Email node
- Adapt the RSS feed URL if you want to track another podcast or source
- Customise the HTML email (colours, logo, footer text) in the Prepare Email Code node
- Adjust the schedule (time or frequency) if you prefer another cadence

Submitted: 18 November 2025
Template designed with n8n version 1.116.2
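The already-sent check mentioned above could be expressed in a Code node roughly like this. It is a sketch under assumptions: the archived guids are read from a prior Data Table node whose name here ("Get Archived Episodes") is hypothetical, and the `guid` field name mirrors the feed's item structure.

```javascript
// Sketch: keep only RSS episodes whose guid is not yet in the archive.
// Assumes the RSS items arrive as this node's input and the archived rows
// come from a prior Data Table node (node name below is hypothetical).
const rssItems = $input.all();                       // each item: { json: { guid, title, link, ... } }
const archived = $('Get Archived Episodes').all()
  .map(row => row.json.guid);

const unsent = rssItems.filter(item => !archived.includes(item.json.guid));

// Keep only the first remaining episode (typically the newest in the feed).
return unsent.slice(0, 1);
```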
by vinci-king-01
Product Price Monitor with Mailgun and MongoDB

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes multiple e-commerce sites, records weekly product prices in MongoDB, analyzes seasonal trends, and emails a concise report to retail stakeholders via Mailgun. It helps retailers make informed inventory and pricing decisions by providing up-to-date pricing intelligence.

Pre-conditions/Requirements

Prerequisites

- n8n instance (self-hosted, desktop, or n8n.cloud)
- ScrapeGraphAI community node installed and activated
- MongoDB database (Atlas or self-hosted)
- Mailgun account with a verified domain
- Publicly reachable n8n Webhook URL (if self-hosted)

Required Credentials

- **ScrapeGraphAI API Key** – Enables web scraping across target sites
- **MongoDB Credentials** – Connection string (MongoDB URI) with read/write access
- **Mailgun API Key & Domain** – To send summary emails

MongoDB Collection Schema

| Field | Type | Example Value | Notes |
|-------|------|---------------|-------|
| productId | String | SKU-12345 | Unique identifier you define |
| productName | String | Women's Winter Jacket | Human-readable name |
| timestamp | Date | 2024-09-15T00:00:00Z | Ingest date (automatically added) |
| price | Number | 79.99 | Scraped price |
| source | String | example-shop.com | Domain where price was scraped |

How it works

Key Steps:

- **Webhook Trigger**: Starts the workflow on a scheduled HTTP call or manual trigger.
- **Code (Prepare Products)**: Defines the list of SKUs/URLs to monitor.
- **Split In Batches**: Processes products in manageable chunks to respect rate limits.
- **ScrapeGraphAI (Scrape Price)**: Extracts price, availability, and currency from each product URL.
- **Merge (Combine Results)**: Re-assembles all batch outputs into one dataset.
- **MongoDB (Upsert Price History)**: Stores each price point for historical analysis.
- **If (Seasonal Trend Check)**: Compares the current price against the historical average to detect anomalies (see the sketch at the end of this section).
- **Set (Email Payload)**: Formats the trend report for email.
- **Mailgun (Send Email)**: Emails the weekly summary to specified recipients.
- **Respond to Webhook**: Returns a "200 OK – Report Sent" response for logging.

Set up steps

Setup Time: 15-20 minutes

1. Install Community Node – In n8n, go to "Settings → Community Nodes" and install @n8n-community/nodes-scrapegraphai.
2. Create Credentials – Add the ScrapeGraphAI API key under Credentials, add MongoDB credentials (type: MongoDB), and add Mailgun credentials (type: Mailgun).
3. Import Workflow – Download the JSON template, then in n8n click "Import" and select the file.
4. Configure Product List – Open the Code (Prepare Products) node and replace the example array with your product objects { id, name, url }.
5. Adjust Cron/Schedule – If you prefer a fully automated schedule, replace the Webhook with a Cron node (e.g., every Monday at 09:00).
6. Verify MongoDB Collection – Ensure the collection (default: productPrices) exists or let n8n create it on first run.
7. Set Recipients – In the Mailgun node, update the to, from, and subject fields.
8. Execute Test Run – Manually trigger the Webhook URL or run the workflow once to verify data flow and email delivery.
9. Activate – Toggle the workflow to "Active" so it runs automatically each week.

Node Descriptions

Core Workflow Nodes:

- **Webhook** – Entry point that accepts a GET/POST call to start the job.
- **Code (Prepare Products)** – Outputs an array of products to monitor.
- **Split In Batches** – Limits scraping to N products per request to avoid banning.
- **ScrapeGraphAI** – Scrapes the HTML of a product page and parses pricing data.
- **Merge** – Re-combines batch results for streamlined processing.
- **MongoDB** – Inserts or updates each product's price history document.
- **If** – Determines whether the price deviates > X% from the season average.
- **Set** – Builds an HTML/text email body containing the findings.
- **Mailgun** – Sends the email via the Mailgun REST API.
- **Respond to Webhook** – Returns an HTTP response for logging/monitoring.
- **Sticky Notes** – Provide in-workflow documentation (no execution).

Data Flow:

Webhook → Code → Split In Batches
Split In Batches → ScrapeGraphAI → Merge
Merge → MongoDB → If
If (true) → Set → Mailgun → Respond to Webhook

Customization Examples

Change Scraping Frequency (Cron):

```
// Cron node settings
{
  "mode": "custom",
  "cronExpression": "0 6 * * 1,4" // Monday & Thursday 06:00
}
```

Extend Data Points (Reviews Count, Stock):

```
// In ScrapeGraphAI extraction config
{
  "price": "css:span.price",
  "inStock": "css:div.availability",
  "reviewCount": "regex:\"(\\d+) reviews\""
}
```

Data Output Format

The workflow outputs structured JSON data:

```
{
  "productId": "SKU-12345",
  "productName": "Women's Winter Jacket",
  "timestamp": "2024-09-15T00:00:00Z",
  "price": 79.99,
  "currency": "USD",
  "source": "example-shop.com",
  "trend": "5% below 3-month average"
}
```

Troubleshooting

Common Issues

- ScrapeGraphAI returns empty data – Confirm selectors/XPath are correct; test with the ScrapeGraphAI playground.
- MongoDB connection fails – Verify IP-whitelisting for Atlas or network connectivity for a self-hosted instance.
- Mail not delivered – Check Mailgun logs for bounce or spam rejection, and ensure the from domain is verified.

Performance Tips

- Use smaller batch sizes (e.g., 5 URLs) to avoid target site rate-limit blocks.
- Cache static product info; scrape only fields that change (price, stock).

Pro Tips:

- Integrate the If node with n8n's Slack node to push urgent price drops to a channel.
- Add a Function node to calculate moving averages for deeper analysis.
- Store raw HTML snapshots in S3/MinIO for auditability and debugging.
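The seasonal trend check described above could be prepared in a Code node placed just before the If node, roughly as follows. The field names (`price`, `history`) and the 10% threshold are illustrative assumptions, not the template's actual values.

```javascript
// Sketch: flag products whose current price deviates from their historical average.
// Assumes each item carries the current `price` and a `history` array of past prices
// (e.g., loaded from MongoDB in a previous step); adjust names to your data.
const DEVIATION_THRESHOLD = 0.10; // 10%, illustrative

return $input.all().map(item => {
  const { price, history = [] } = item.json;
  const avg = history.length
    ? history.reduce((sum, p) => sum + p, 0) / history.length
    : price;

  const deviation = avg ? (price - avg) / avg : 0;

  return {
    json: {
      ...item.json,
      seasonalAverage: Number(avg.toFixed(2)),
      deviationPct: Number((deviation * 100).toFixed(1)),
      isAnomaly: Math.abs(deviation) > DEVIATION_THRESHOLD, // feed this to the If node
    },
  };
});
```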