by vinci-king-01
**Product Price Monitor with Pushover and Baserow**

⚠️ **COMMUNITY TEMPLATE DISCLAIMER**: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes multiple e-commerce sites for selected products, analyzes weekly pricing trends, stores historical data in Baserow, and sends an instant Pushover notification when significant price changes occur. It is ideal for retailers who need to track seasonal fluctuations and optimize inventory or pricing strategies.

**Pre-conditions / Requirements**

Prerequisites:

- An active n8n instance (self-hosted or n8n Cloud)
- ScrapeGraphAI community node installed
- At least one publicly accessible webhook URL (for on-demand runs)
- A Baserow database with a table prepared for product data
- A Pushover account and registered application

Required credentials:

- **ScrapeGraphAI API Key** – enables web-scraping capabilities
- **Baserow Personal API Token** – allows read/write access to your table
- **Pushover User Key & API Token** – sends mobile/desktop push notifications
- (Optional) HTTP Basic token or API keys for any private e-commerce endpoints you plan to monitor

Baserow table specification:

| Field Name | Type | Description |
|------------|-----------|--------------------------|
| Product ID | Number | Internal ID or SKU |
| Name | Text | Product title |
| URL | URL | Product page |
| Price | Number | Current price (float) |
| Currency | Single select | Currency code (USD, EUR, etc.) |
| Last Seen | Date/Time | Last price check |
| Trend | Number | 7-day % change |

**How it works**
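As a sketch, the 7-day % change stored in the `Trend`/`delta` fields and the significant-change test can be expressed as a small Code-node helper (function names are illustrative; the ±5% default matches the If node's threshold described in the key steps):

```javascript
// Sketch of the price-change check (illustrative, not the template's exact code).
// `previous` is the last stored Baserow price, `current` the freshly scraped one.
function priceDelta(previous, current) {
  // percentage change, rounded to one decimal place
  return Math.round(((current - previous) / previous) * 1000) / 10;
}

function isSignificantChange(previous, current, thresholdPct = 5) {
  // fire an alert only when the move exceeds the configured threshold
  return Math.abs(priceDelta(previous, current)) > thresholdPct;
}

// Example: a drop from 86.50 to 79.99 is about -7.5%, so an alert fires
const delta = priceDelta(86.5, 79.99);
```

The same helper produces the `delta` value shown in the data-output example further down.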
Key steps:

- **Webhook Trigger** – Manually or externally trigger the weekly price-check run.
- **Set Node** – Define an array of product URLs and metadata.
- **Split In Batches** – Process products one at a time to avoid rate limits.
- **ScrapeGraphAI Node** – Extract current price, title, and availability from each URL.
- **If Node** – Determine whether the price has changed by more than ±5% since the last entry.
- **HTTP Request (Trend API)** – Retrieve seasonal trend scores (optional).
- **Merge Node** – Combine scrape data with trend analysis.
- **Baserow Nodes** – Upsert the latest record and fetch historical data for comparison.
- **Pushover Node** – Send an alert when significant price movement is detected.
- **Sticky Notes** – Documentation and inline comments for maintainability.

**Set up steps** (setup time: 15–25 minutes)

1. **Install Community Node**: In n8n, go to Settings → Community Nodes and install ScrapeGraphAI.
2. **Create Baserow Table**: Match the field structure shown above.
3. **Obtain Credentials**: ScrapeGraphAI API key from your dashboard; Baserow personal token (/account/settings); Pushover user key & API token.
4. **Clone Workflow**: Import this template into n8n.
5. **Configure Credentials in Nodes**: Open each ScrapeGraphAI, Baserow, and Pushover node and select/enter the appropriate credential.
6. **Add Product URLs**: Open the first Set node and replace the example array with your actual product list.
7. **Adjust Thresholds**: In the If node, change the `5` value if you want a higher/lower alert threshold.
8. **Test Run**: Execute the workflow manually; verify the Baserow rows and the Pushover notification.
9. **Schedule**: Add a Cron trigger or external scheduler to run weekly.

**Node Descriptions**

Core workflow nodes:

- **Webhook** – Entry point for manual or API-based triggers.
- **Set** – Holds the array of product URLs and meta fields.
- **SplitInBatches** – Iterates through each product to prevent request spikes.
- **ScrapeGraphAI** – Scrapes price, title, and currency from product pages.
- **If** – Compares the new price against the previous price in Baserow.
- **HTTP Request** – Calls a trend API (e.g., Google Trends) to get a seasonal score.
- **Merge** – Combines scraping results with trend data.
- **Baserow (Upsert & Read)** – Writes fresh data and fetches the historical price for comparison.
- **Pushover** – Sends a formatted push notification with the price delta.
- **StickyNote** – Documents purpose and hints within the workflow.

Data flow:

- Webhook → Set → SplitInBatches → ScrapeGraphAI
- True branch: ScrapeGraphAI → If → HTTP Request → Merge → Baserow Upsert → Pushover
- False branch: If → Baserow Upsert

**Customization Examples**

Change the notification channel to Slack:

```javascript
// Replace the Pushover node with a Slack node
{
  "channel": "#pricing-alerts",
  "text": `🚨 ${$json["Name"]} changed by ${$json["delta"]}% – now ${$json["Price"]} ${$json["Currency"]}`
}
```

Additional data enrichment (stock status):

```javascript
// Add to ScrapeGraphAI's selector map
{
  "stock": {
    "selector": ".availability span",
    "type": "text"
  }
}
```

**Data Output Format**

The workflow outputs structured JSON data:

```json
{
  "ProductID": 12345,
  "Name": "Winter Jacket",
  "URL": "https://shop.example.com/winter-jacket",
  "Price": 79.99,
  "Currency": "USD",
  "LastSeen": "2024-11-20T10:34:18.000Z",
  "Trend": 12,
  "delta": -7.5
}
```

**Troubleshooting**

Common issues:

- **Empty scrape result** – Check whether the product page changed its HTML structure; update the CSS selectors in ScrapeGraphAI.
- **Baserow "Row not found" errors** – Ensure Product ID (or another unique field) is set as the primary key for the upsert.

Performance tips:

- Limit batch size to 5–10 URLs to avoid IP blocking.
- Use n8n's built-in proxy settings if scraping sites with geo-restrictions.

Pro tips:

- Store historical JSON responses in a separate Baserow table for deeper analytics.
- Standardize currency symbols to avoid false change detections.
- Couple this workflow with an n8n dashboard to visualize price trends in real time.
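For the "standardize currency symbols" pro tip, a hypothetical normalization helper might look like this (the symbol map, field names, and string formats are assumptions for illustration, not part of the template):

```javascript
// Hypothetical helper: map common scraped symbols/codes to ISO codes so that
// "€79.99", "EUR 79.99" and "79.99 EUR" all compare as the same price.
const CURRENCY_MAP = {
  '$': 'USD', 'US$': 'USD',
  '€': 'EUR', '£': 'GBP',
  'USD': 'USD', 'EUR': 'EUR', 'GBP': 'GBP',
};

function normalizePrice(raw) {
  // pick out a currency symbol or three-letter code, then the numeric amount
  const symbol = (raw.match(/US\$|[$€£]|[A-Z]{3}/) || [''])[0];
  const amount = parseFloat(raw.replace(/[^0-9.]/g, ''));
  return { amount, currency: CURRENCY_MAP[symbol] || 'UNKNOWN' };
}
```

Running scraped price strings through a helper like this before the If node comparison avoids false "change" alerts caused purely by formatting differences.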
by Garri
**Description**

This workflow is an n8n-based automation that lets users download TikTok/Reels videos without watermarks simply by sending the video link to a Telegram bot. A Telegram Trigger receives the link from the user, then an HTTP request to a third-party API (mediadl.app) processes the link and retrieves the download data. The workflow filters the results to find the "Download without watermark" link, downloads the video in MP4 format, and sends it back to the user directly in their Telegram chat.

Key features:

- Supports the best available video quality (bestvideo+bestaudio)
- Automatically removes watermarks
- Instant response directly in the Telegram chat
- Fully automated, with no manual downloads required

**How It Works**

1. **Telegram Trigger** – The user sends a TikTok or Reels link to the Telegram bot; the workflow captures and stores the link for processing.
2. **HTTP Request – MediaDL API** – The link is sent via POST to https://mediadl.app/api/download; the API processes the link and returns video file data.
3. **Wait Delay** – The workflow waits a few seconds to ensure the API response is fully ready.
4. **Edit Fields** – Extracts the video file URL from the API response.
5. **Additional Wait Delay** – Adds a short pause to avoid connection errors during the download process.
6. **HTTP Request – Proxy Download** – Downloads the MP4 video file directly from the filtered URL.
7. **Send Video via Telegram** – The downloaded video is sent back to the user in their Telegram chat.

**How to Set Up**

- **Create & configure a Telegram bot**: Open Telegram and search for BotFather; send /newbot, choose a name and username for your bot, and copy the bot token provided (you'll need it in n8n).
- **Prepare your n8n environment**: Log in to your n8n instance (self-hosted or n8n Cloud); go to Credentials and create new Telegram API credentials using your bot token.
- **Import the workflow**: In n8n, click Import and select the PROJECT_DOWNLOAD_TIKTOK_REELS.json file.
- **Configure the Telegram nodes**: In the Telegram Trigger and Send Video nodes, connect your Telegram API credentials.
- **Configure the HTTP Request nodes**: Ensure the Download2 and HTTP Request nodes have the correct URL and headers (pre-configured for mediadl.app), and make sure `responseFormat` is set to `file` in the final download node.
- **Activate the workflow**: Toggle Activate in the top-right corner of n8n, then test by sending a TikTok or Reels link to your bot; you should receive the no-watermark video in return.
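The "Edit Fields" extraction step could be sketched in a Code node like this. Note that the response shape shown here is an assumption for illustration only; inspect the real mediadl.app payload and adjust the path and label matching accordingly:

```javascript
// Illustrative sketch: pick the no-watermark MP4 link out of an API response.
// The `links` array shape below is assumed, not the documented mediadl.app format.
function pickNoWatermarkUrl(response) {
  const links = response.links || [];
  const match = links.find(
    (l) => /without watermark/i.test(l.label) && l.type === 'mp4'
  );
  return match ? match.url : null;
}
```

Returning `null` when no matching link exists lets a downstream If node branch to an error message instead of attempting a broken download.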
by n8n Team
This workflow demonstrates how to connect an open-source model to a Basic LLM node. The workflow is triggered when a new manual chat message appears. The message is then run through a Language Model Chain that is set up to process text with a specific prompt to guide the model's responses. Note that open-source LLMs with a small number of parameters require slightly different prompting with more guidance to the model. You can change the default Mistral-7B-Instruct-v0.1 model to any other LLM supported by HuggingFace. You can also connect other nodes, such as Ollama. Note that to use this template, you need to be on n8n version 1.19.4 or later.
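For illustration, the extra guidance a small instruct-tuned model needs can be sketched as a prompt-building helper. The `[INST]` wrapper follows Mistral-7B-Instruct's chat format; verify the exact tokens against the model card, since HuggingFace and the n8n node may apply the chat template for you:

```javascript
// Illustrative prompt builder for a small instruct model (names are examples).
// Small models benefit from explicit, restrictive instructions like these.
function buildPrompt(userMessage) {
  const guidance =
    'You are a helpful assistant. Answer the question directly and concisely. ' +
    'Do not repeat the question. If you do not know the answer, say so.';
  return `[INST] ${guidance}\n\n${userMessage} [/INST]`;
}
```

With larger models the guidance sentences can usually be trimmed; with 7B-class models, leaving them out tends to produce rambling or off-topic replies.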
by siyad
This n8n workflow automates the process of monitoring inventory levels for Shopify products, ensuring timely updates and efficient stock management. It is designed to alert users when inventory levels are low or out of stock, integrating with Shopify's webhook system and providing notifications through Discord (which can be swapped for any messaging platform) with product images and details.

**Workflow Overview**

- **Webhook node (Shopify listener)**: Listens for Shopify's inventory-level webhook and triggers the workflow whenever inventory levels update. The webhook is configured in Shopify settings, where the n8n URL is specified to receive inventory-level updates.
- **Function node (inventory check)**: Processes the data received from the Shopify webhook. It extracts the available inventory and the inventory item ID, and determines whether the inventory is low (fewer than 4 items) or out of stock.
- **Condition nodes (inventory-level check)**: Two condition nodes follow the function node. One checks whether the inventory is low (low_inventory equals true); the other checks whether it is out of stock (out_of_stock equals true).
- **GraphQL node (product details retrieval)**: Connected to the condition nodes, this node fetches detailed product information via Shopify's GraphQL API: the product variant, title, current inventory quantity, and the first product image.
- **HTTP node (Discord notification)**: The final node sends a notification to Discord. It includes an embed with the product title, a warning message ("This product is running out of stock!"), the remaining inventory quantity, product variant details, and the product image, so relevant stakeholders are immediately informed about critical inventory levels.
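The "inventory check" Function node logic can be sketched as follows (field names follow Shopify's inventory_levels/update webhook payload; the low-stock threshold of 4 comes from the description above, and the exact structure of your Function node may differ):

```javascript
// Sketch of the inventory-check Function node (illustrative, not the exact code).
// `available` and `inventory_item_id` come from Shopify's inventory_levels/update payload.
function checkInventory(payload) {
  const available = payload.available;
  return {
    inventory_item_id: payload.inventory_item_id,
    available,
    low_inventory: available > 0 && available < 4, // fewer than 4 items left
    out_of_stock: available <= 0,
  };
}
```

The two boolean flags feed directly into the two condition nodes that follow, so each downstream branch needs only a simple equality check.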
by Oneclick AI Squad
Monitor Indian (NSE/BSE) and US stock markets with intelligent price alerts, cooldown periods, and multi-channel notifications (Email + Telegram). The workflow automatically tracks price movements and sends alerts when stocks cross predefined upper/lower limits. Perfect for day traders, investors, and portfolio managers who need instant notifications for price breakouts and breakdowns.

**How It Works**

1. **Market Hours Trigger** – Runs every 2 minutes during market hours
2. **Read Stock Watchlist** – Fetches your stock list from Google Sheets
3. **Parse Watchlist Data** – Processes stock symbols and alert parameters
4. **Fetch Live Stock Price** – Gets real-time prices from the Twelve Data API
5. **Smart Alert Logic** – Intelligent price checking with cooldown periods
6. **Check Alert Conditions** – Validates whether alerts should be triggered
7. **Send Email Alert** – Sends detailed email notifications
8. **Send Telegram Alert** – Instant mobile notifications
9. **Update Alert History** – Records alert timestamps in Google Sheets
10. **Alert Status Check** – Monitors workflow success/failure
11. **Success/Error Notifications** – Admin notifications for monitoring

Key features:

- **Smart cooldown**: Prevents alert spam
- **Multi-market**: Supports Indian & US stocks
- **Dual alerts**: Email + Telegram notifications
- **Auto-update**: Tracks last alert times
- **Error handling**: Built-in failure notifications

**Setup Requirements**

1. **Google Sheets setup** – Create a Google Sheet with these columns (in exact order):
   - A: symbol (e.g., TCS, AAPL, RELIANCE.BSE)
   - B: upper_limit (e.g., 4000)
   - C: lower_limit (e.g., 3600)
   - D: direction (both/above/below)
   - E: cooldown_minutes (e.g., 15)
   - F: last_alert_price (auto-updated)
   - G: last_alert_time (auto-updated)
2. **API keys & IDs to replace**:
   - YOUR_GOOGLE_SHEET_ID_HERE – replace with your Google Sheet ID
   - YOUR_TWELVE_DATA_API_KEY – get a free API key from twelvedata.com
   - YOUR_TELEGRAM_CHAT_ID – your Telegram chat ID (optional)
   - your-email@gmail.com – your sender email
   - alert-recipient@gmail.com – alert recipient email
3. **Stock symbol format**:
   - US stocks: use simple symbols like AAPL, TSLA, MSFT
   - Indian stocks: use the .BSE or .NSE suffix, e.g., TCS.NSE, RELIANCE.BSE
4. **Credentials setup in n8n**:
   - Google Sheets: Service Account credentials
   - Email: SMTP credentials
   - Telegram: bot token (optional)

Example Google Sheet data:

| symbol | upper_limit | lower_limit | direction | cooldown_minutes |
|--------|-------------|-------------|-----------|------------------|
| TCS.NSE | 4000 | 3600 | both | 15 |
| AAPL | 180 | 160 | both | 10 |
| RELIANCE.BSE | 2800 | 2600 | above | 20 |

Output example: Alert: TCS crossed the upper limit. Current Price: ₹4100, Upper Limit: ₹4000.
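The "Smart Alert Logic" step can be sketched as a predicate over a watchlist row (column names mirror the Google Sheet above; the function itself is an illustration, not the workflow's exact code):

```javascript
// Sketch of the alert decision: limit crossing in the configured direction,
// gated by the per-symbol cooldown. Field names match the sheet columns.
function shouldAlert(row, price, nowMs) {
  const { upper_limit, lower_limit, direction, cooldown_minutes, last_alert_time } = row;
  const crossedUp = price > upper_limit && (direction === 'both' || direction === 'above');
  const crossedDown = price < lower_limit && (direction === 'both' || direction === 'below');
  if (!crossedUp && !crossedDown) return false;
  if (last_alert_time) {
    const elapsedMin = (nowMs - Date.parse(last_alert_time)) / 60000;
    if (elapsedMin < cooldown_minutes) return false; // still cooling down, suppress spam
  }
  return true;
}
```

After an alert is sent, writing the current timestamp back to `last_alert_time` (column G) is what makes the cooldown effective on the next 2-minute run.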
by George Zargaryan
**Multichannel AI Assistant Demo for Chatwoot**

This simple n8n template demonstrates a Chatwoot integration that can:

- Receive new messages via a webhook.
- Retrieve conversation history.
- Process the message history into a format suitable for an LLM.
- Demonstrate an AI Assistant processing a user's query.
- Send the AI Assistant's response back to Chatwoot.

**Use Case**: If you have multiple communication channels with your clients (e.g., Telegram, Instagram, WhatsApp, Facebook) integrated with Chatwoot, you can use this template as a starting point to build more sophisticated and tailored AI solutions that cover all channels at once.

**How it works**

1. A webhook receives the message created event from Chatwoot.
2. The webhook data is filtered to keep only the necessary information for a cleaner workflow.
3. The workflow checks whether the message is "incoming." This is crucial to prevent the assistant from replying to its own messages and creating endless loops.
4. The conversation history is retrieved from Chatwoot via an API call using the HTTP Request node. This keeps the assistant's interaction natural and continuous without needing to store conversation history locally.
5. A simple AI Assistant processes the conversation history and generates a response to the user based on its built-in knowledge base (see the prompt in the assistant node).
6. The final HTTP Request node sends the AI-generated response back to the appropriate Chatwoot conversation.

**How to Use**

1. In Chatwoot, go to Settings → Integrations → Webhooks and add your n8n webhook URL. Be sure to select the message created event.
2. In the HTTP Request nodes, replace the placeholder values https://yourchatwooturl.com and api_access_token. You can find these values on your Chatwoot super admin page.
3. The LLM node is configured to use OpenRouter. Add your OpenRouter credentials, or replace the node with your preferred LLM provider.

**Requirements**

- An API key for OpenRouter or credentials for your preferred LLM provider.
- A Chatwoot account with at least one integrated channel and super admin access to obtain the api_access_token.

**Need Help Building Something More?**

Contact me on:

- Telegram: @ninesfork
- LinkedIn: George Zargaryan

Happy Hacking! 🚀
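The "process the message history into a format suitable for an LLM" step can be sketched as follows (in Chatwoot's API, `message_type` 0 is incoming and 1 is outgoing; this mapping is illustrative and your Code node may differ):

```javascript
// Sketch: map Chatwoot conversation messages to the role/content shape most
// chat LLM APIs expect. message_type 0 = incoming (user), 1 = outgoing (agent).
function toLlmMessages(chatwootMessages) {
  return chatwootMessages
    .filter((m) => m.content && m.content.trim() !== '') // drop empty/attachment-only messages
    .map((m) => ({
      role: m.message_type === 0 ? 'user' : 'assistant',
      content: m.content,
    }));
}
```

Filtering out empty messages matters in practice, since attachment-only Chatwoot messages have no `content` and would otherwise produce blank turns in the prompt.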
by vinci-king-01
**Product Price Monitor with Slack and Jira**

⚠️ **COMMUNITY TEMPLATE DISCLAIMER**: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes multiple e-commerce sites, analyses weekly seasonal price trends, and notifies your team in Slack while opening Jira tickets for items that require price adjustments. It helps retailers plan inventory and pricing by surfacing actionable insights every week.

**Pre-conditions / Requirements**

Prerequisites:

- n8n instance (self-hosted or n8n Cloud)
- ScrapeGraphAI community node installed
- Slack workspace & channel for notifications
- Jira Software project (cloud or server)
- Basic JavaScript knowledge for optional custom code edits

Required credentials:

- **ScrapeGraphAI API Key** – enables web scraping
- **Slack OAuth Access Token** – required by the Slack node
- **Jira Credentials** – email & API token (cloud) or username & password (server)
- (Optional) Proxy credentials – if target websites block direct scraping

Specific setup requirements:

| Resource | Purpose | Example |
|----------|---------|---------|
| Product URL list | Seed URLs to monitor | https://example.com/products-winter-sale |
| Slack Channel | Receives trend alerts | #pricing-alerts |
| Jira Project Key | Tickets are created here | ECOM |

**How it works**

Key steps:

- **Webhook Trigger** – Kicks off the workflow via a weekly schedule or manual call.
- **Set Product URLs** – Prepares the list of product pages to analyse.
- **SplitInBatches** – Processes URLs in manageable batches to avoid rate limits.
- **ScrapeGraphAI** – Extracts current prices, stock, and seasonality hints from each URL.
- **Code (Trend Logic)** – Compares scraped prices against historical averages.
- **If (Threshold Check)** – Determines whether price deviations exceed ±10%.
- **Slack Node** – Sends a formatted message to the pricing channel for each deviation.
- **Jira Node** – Creates/updates a ticket linking to the product for further action.
- **Merge** – Collects all batch results for summary reporting.

**Set up steps** (setup time: 15–20 minutes)

1. **Install the community node**: In n8n, go to Settings → Community Nodes, search for "ScrapeGraphAI", and install it.
2. **Add credentials**:
   - Slack → Credentials → New, paste your Bot/User OAuth token.
   - Jira → Credentials → New, enter your domain, email/username, and API token/password.
   - ScrapeGraphAI → Credentials → New, paste your API key.
3. **Import the workflow**: Upload or paste the JSON template into n8n.
4. **Edit the "Set Product URLs" node**: Replace the placeholder URLs with your real product pages.
5. **Configure the schedule**: Replace the Webhook Trigger with a Cron node (e.g., every Monday 09:00), or keep the webhook for manual runs.
6. **Map Jira fields**: In the Jira node, ensure the Project Key, Issue Type (e.g., Task), and Summary fields match your instance.
7. **Test run**: Execute the workflow and confirm the Slack message appears and a Jira issue is created.
8. **Activate**: Toggle the workflow to Active so it runs automatically.

**Node Descriptions**

Core workflow nodes:

- **Webhook** – Default trigger; can be swapped with Cron for weekly automation.
- **Set (Product URLs)** – Stores an array of product links for scraping.
- **SplitInBatches** – Limits each ScrapeGraphAI call to five URLs to reduce load.
- **ScrapeGraphAI** – Crawls and parses HTML, returning JSON with title, price, and availability.
- **Code (Trend Logic)** – Calculates percentage change vs. historical data (stored externally or hard-coded for the demo).
- **If (Threshold Check)** – Routes items above/below the set variance.
- **Slack** – Posts a rich-format message containing the product title, old vs. new price, and a link.
- **Jira** – Creates or updates a ticket with priority set to **Medium** and assigns it to the Pricing team lead.
- **Merge** – Recombines batch streams for optional reporting or storage.

Data flow:

Webhook → Set (Product URLs) → SplitInBatches → ScrapeGraphAI → Code (Trend Logic) → If → Slack / Jira → Merge

**Customization Examples**

Change the price deviation threshold:

```javascript
// Code (Trend Logic) node
const threshold = 0.05; // 5% instead of the default 10%
```

Alter the Slack message template:

```javascript
{
  "text": `${item.name} price changed from $${item.old} to $${item.new} (${item.diff}%).`,
  "attachments": [
    {
      "title": "Product Link",
      "title_link": item.url,
      "color": "#4E79A7"
    }
  ]
}
```

**Data Output Format**

The workflow outputs structured JSON data:

```json
{
  "product": "Winter Jacket",
  "url": "https://example.com/winter-jacket",
  "oldPrice": 129.99,
  "newPrice": 99.99,
  "change": -23.06,
  "scrapedAt": "2023-11-04T09:00:00Z",
  "status": "Below Threshold",
  "slackMsgId": "A1B2C3",
  "jiraIssueKey": "ECOM-101"
}
```

**Troubleshooting**

Common issues:

- **ScrapeGraphAI returns empty data** – Verify the selectors; many sites use dynamic rendering and require a headless-browser flag.
- **Slack message not delivered** – Check that the OAuth token scopes include chat:write, and confirm the channel ID.
- **Jira ticket creation fails** – Field-mapping mismatch; ensure the Issue Type is valid and that required custom fields are supplied.

Performance tips:

- Batch fewer URLs (e.g., 3 instead of 5) to reduce timeout risk.
- Cache historical prices in an external DB (Postgres, Airtable) instead of reading large CSVs in the Code node.

Pro tips:

- Rotate proxies/IPs within ScrapeGraphAI to bypass aggressive e-commerce anti-bot measures.
- Add a Notion or Sheets node after Merge for historical logging.
- Use n8n's Error Trigger workflow to alert when ScrapeGraphAI fails more than X times per run.
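The Code (Trend Logic) and If (Threshold Check) pair described above can be sketched as a single helper (illustrative only; the template stores historical averages externally or hard-codes them for the demo):

```javascript
// Sketch of trend logic: relative change vs. a historical average, routed by
// a configurable deviation threshold (±10% by default, per the If node).
function trendCheck(historicalAvg, currentPrice, threshold = 0.10) {
  const change = (currentPrice - historicalAvg) / historicalAvg;
  return {
    changePct: Math.round(change * 10000) / 100, // e.g. -23.08
    exceedsThreshold: Math.abs(change) > threshold,
  };
}
```

Lowering `threshold` to `0.05`, as in the customization example above, makes the If node route more items to the Slack/Jira branch.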
by Kevin Meneses
**What this workflow does**

This workflow automatically audits web pages for SEO issues and generates an executive-friendly SEO report using AI. It is designed for marketers, founders, and SEO teams who want fast, actionable insights without manually reviewing HTML, meta tags, or SERP data.

The workflow:

1. Reads URLs from Google Sheets
2. Scrapes page content using Decodo (reliable scraping, even on protected sites)
3. Extracts key SEO elements (title, meta description, canonical, H1/H2, visible text)
4. Uses an AI Agent to analyze the page and generate: overall SEO status, top issues, quick wins, and title & meta-description recommendations
5. Saves results to Google Sheets
6. Sends a formatted HTML executive report by email (Gmail)

**Who this workflow is for**

This template is ideal for:

- SEO consultants and agencies
- SaaS marketing teams
- Founders monitoring their landing pages
- Content teams doing SEO quality control

It focuses on on-page SEO fundamentals, not backlink analysis or technical crawling.

**Setup (step by step)**

1. **Google Sheets**: Create an input sheet with one URL per row and an output sheet to store the SEO results.
2. **Decodo**: Add your Decodo API credentials; the URL is automatically taken from the input sheet. 👉 Decodo – Web Scraper for n8n
3. **AI Agent**: Connect your LLM credentials (OpenAI, Gemini, etc.); the prompt is already optimized for non-technical SEO summaries.
4. **Gmail**: Connect your Gmail account and set the recipient email address; emails are sent in clean HTML format.

**Notes & disclaimer**

- This is a community template
- Results depend on page accessibility and content structure
- It focuses on on-page SEO, not backlinks or rankings
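The "extract key SEO elements" step can be approximated with a small regex-based helper (a sketch only; the template works on Decodo's scraped output, and a proper HTML parser is more robust against messy or reordered attributes):

```javascript
// Rough sketch of on-page SEO extraction with plain regexes (demo quality).
// Assumes attribute order like name="..." content="..."; adjust for real pages.
function extractSeoElements(html) {
  const pick = (re) => {
    const m = html.match(re);
    return m ? m[1].trim() : null;
  };
  return {
    title: pick(/<title[^>]*>([\s\S]*?)<\/title>/i),
    metaDescription: pick(/<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i),
    canonical: pick(/<link[^>]+rel=["']canonical["'][^>]+href=["']([^"']*)["']/i),
    h1: pick(/<h1[^>]*>([\s\S]*?)<\/h1>/i),
  };
}
```

Returning `null` for missing elements lets the AI Agent flag "missing meta description" or "no canonical tag" as explicit top issues in the report.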
by Vigh Sandor
**Workflow Overview**

This advanced n8n workflow provides intelligent email automation with AI-generated responses. It combines four core functions:

1. Monitors incoming emails via IMAP (e.g., SOGo)
2. Sends instant Telegram notifications for all new emails
3. Uses AI (Ollama LLM) to generate contextual, personalized auto-replies
4. Sends confirmation notifications when auto-replies are sent

Unlike traditional auto-responders, this workflow analyzes email content and creates unique, relevant responses for each message.

**Setup Instructions**

Prerequisites — before setting up this workflow, ensure you have:

- An n8n instance (self-hosted or cloud) with AI/LangChain nodes enabled
- IMAP email account credentials (e.g., SOGo, Gmail, Outlook)
- SMTP server access for sending emails
- Telegram Bot API credentials
- A Telegram chat ID where notifications will be sent
- Ollama installed locally or accessible via network (for the AI model)
- The llama3.1 model downloaded in Ollama

**Step 1: Install and Configure Ollama**

Local installation:

1. Install Ollama on your system: visit https://ollama.ai, download the installer for your OS, and follow the installation instructions for your platform.
2. Download the llama3.1 model: `ollama pull llama3.1`
3. Verify the model is available: `ollama list`
4. Start the Ollama service (if not already running): `ollama serve`
5. Test the model: `ollama run llama3.1 "Hello, world!"`
Remote Ollama Instance If using a remote Ollama server: Note the server URL (e.g., http://192.168.1.100:11434) Ensure network connectivity between n8n and Ollama server Verify firewall allows connections on port 11434 Step 2: Configure IMAP Credentials Navigate to n8n Credentials section Create a new IMAP credential with the following information: Host: Your IMAP server address Port: Usually 993 for SSL/TLS Username: Your email address Password: Your email password or app-specific password Enable SSL/TLS: Yes (recommended) Security: Use STARTTLS or SSL/TLS Step 3: Configure SMTP Credentials Create a new SMTP credential in n8n Enter the following details: Host: Your SMTP server address (e.g., Postfix server) Port: Usually 587 (STARTTLS) or 465 (SSL) Username: Your email address Password: Your email password or app-specific password Secure connection: Enable based on your server configuration Allow unauthorized certificates: Enable if using self-signed certificates Step 4: Configure Telegram Bot Create a Telegram bot via BotFather: Open Telegram and search for @BotFather Send /newbot command Follow instructions to create your bot Save the API token provided by BotFather Obtain your Chat ID: Method 1: Send a message to your bot, then visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates Method 2: Use a Telegram Chat ID bot like @userinfobot Method 3: For group chats, add the bot to the group and check the updates Note: Group chat IDs are negative numbers (e.g., -1234567890123) Add Telegram API credential in n8n: Credential Type: Telegram API Access Token: Your bot token from BotFather Step 5: Configure Ollama API Credential In n8n Credentials section, create a new Ollama API credential Configure based on your setup: For local Ollama: Base URL is usually http://localhost:11434 For remote Ollama: Enter the server URL (e.g., http://192.168.1.100:11434) Test the connection to ensure n8n can reach Ollama Step 6: Import and Configure Workflow Import the workflow 
JSON into your n8n instance Update the following nodes with your specific information: Check Incoming Emails Node Verify IMAP credentials are connected Configure polling interval (optional): Default behavior checks on workflow trigger schedule Can be set to check every N minutes Set mailbox folder if needed (default is INBOX) Send Notification from Incoming Email Node Update chatId parameter with your Telegram Chat ID Replace -1234567890123 with your actual chat ID Customize notification message template if desired Current format includes: Sender, Subject, Date-Time Dedicate Filtering As No-Response Node Review spam filter conditions: Blocks emails from addresses containing "noreply" or "no-reply" Blocks emails with "newsletter" in subject line (case-insensitive) Add additional filtering rules as needed: Block specific domains Filter by keywords Whitelist/blacklist specific senders Ollama Model Node Verify Ollama API credential is connected Confirm model name: llama3.1:bf230501 (or adjust to your installed version) Context window set to 4096 tokens (sufficient for most emails) Can be adjusted based on your needs and hardware capabilities Basic LLM Chain Node Review the AI prompt engineering (pre-configured but customizable) Current prompt instructs the AI to: Read the email content Identify main topic in 2-4 words Generate a professional acknowledgment response Keep responses consistent and concise Modify prompt if you want different response styles Send Auto-Response in SMTP Node Verify SMTP credentials are connected Check fromEmail uses correct email address: Currently set to {{ $('Check Incoming Emails - IMAP (example: SOGo)').item.json.to }} This automatically uses the recipient address (your mailbox) Subject automatically includes "Re: " prefix with original subject Message text comes from AI-generated content Send Notification from Response Node Update chatId parameter (same as first notification node) This sends confirmation that auto-reply was sent Includes 
original email details and the AI-generated response text Step 7: Test the Workflow Perform initial configuration test: Test Ollama connectivity: curl http://localhost:11434/api/tags Verify all credentials are properly configured Check n8n has access to required network endpoints Execute a test run: Click "Execute Workflow" button in n8n Send a test email to your monitored inbox Use a clear subject and body for better AI response Verify workflow execution: First Telegram notification received (incoming email alert) AI processes the email content Auto-reply is sent to the original sender Second Telegram notification received (confirmation with AI response) Check n8n execution log for any errors Verify email delivery: Check if auto-reply arrived at sender's inbox Verify it's not marked as spam Review AI-generated content for appropriateness Step 8: Fine-Tune AI Responses Send various types of test emails: Different topics (inquiry, complaint, information request) Various email lengths (short, medium, long) Different languages if applicable Review AI-generated responses: Check if topic identification is accurate Verify response appropriateness Ensure tone is professional Adjust the prompt if needed: Modify topic word count (currently 2-4 words) Change response template Add language-specific instructions Include custom sign-offs or branding Step 9: Activate the Workflow Once testing is successful and AI responses are satisfactory: Toggle the workflow to "Active" state The workflow will now run automatically on the configured schedule Monitor initial production runs: Review first few auto-replies carefully Check Telegram notifications for any issues Verify SMTP delivery rates Set up monitoring: Enable n8n workflow error notifications Monitor Ollama resource usage Check email server logs periodically How to Use Normal Operation Once activated, the workflow operates fully automatically: Email Monitoring: The workflow continuously checks your IMAP inbox for new messages 
based on the configured polling interval or trigger schedule. Immediate Incoming Notification: When a new email arrives, you receive an instant Telegram notification containing: Sender's email address Email subject line Date and time received Note indicating it's from IMAP mailbox Intelligent Filtering: The workflow evaluates each email against spam filter criteria: Emails from "noreply" or "no-reply" addresses are filtered out Emails with "newsletter" in the subject line are filtered out Filtered emails receive notification but no auto-reply Legitimate emails proceed to AI response generation AI Response Generation: For emails that pass the filter: The AI reads the full email content Analyzes the main topic or purpose Generates a personalized acknowledgment Creates a professional response that: Thanks the sender References the specific topic Promises a personal follow-up Maintains professional tone Automatic Reply Delivery: The AI-generated response is sent via SMTP to the original sender with: Subject line: "Re: [Original Subject]" From address: Your monitored mailbox Body: AI-generated contextual message Response Confirmation: After the auto-reply is sent, you receive a second Telegram notification showing: Original email details (sender, subject, date) The complete AI-generated response text Confirmation of successful delivery Understanding AI Response Generation The AI analyzes emails intelligently: Example 1: Business Inquiry Incoming Email: "I'm interested in your consulting services for our Q4 project..." AI Topic Identification: "consulting services" Generated Response: "Dear Correspondent! Thank you for your message regarding consulting services. I will respond with a personal message as soon as possible. Have a nice day!" Example 2: Technical Support Incoming Email: "We're experiencing issues with the API integration..." AI Topic Identification: "API integration issues" Generated Response: "Dear Correspondent! 
Thank you for your message regarding API integration issues. I will respond with a personal message as soon as possible. Have a nice day!"

**Example 3: General Question**
- Incoming Email: "Could you provide more information about pricing?"
- AI Topic Identification: "pricing information"
- Generated Response: "Dear Correspondent! Thank you for your message regarding pricing information. I will respond with a personal message as soon as possible. Have a nice day!"

### Customizing Filter Rules

To modify which emails receive AI-generated auto-replies, open the "Dedicate Filtering As No-Response" node and modify existing conditions or add new ones:

- **Block specific domains**: `{{ $json.from.value[0].address }}` / Operation: does not contain / Value: `@spam-domain.com`
- **Whitelist VIP senders** (only respond to specific people): `{{ $json.from.value[0].address }}` / Operation: contains / Value: `@important-client.com`
- **Filter by subject keywords**: `{{ $json.subject.toLowerCase() }}` / Operation: does not contain / Value: `unsubscribe`
- **Combine multiple conditions**: use AND logic (all must be true) for stricter filtering, or OR logic (any can be true) for more permissive filtering

### Customizing AI Prompt

To change how the AI generates responses, open the "Basic LLM Chain" node and modify the prompt text in the "text" parameter. The current structure is: context setting (read email, identify topic), output format specification, and rules for AI behavior.

Example modifications:

**Add company branding:**

> Return only this response, filling in the [TOPIC]: Dear Correspondent! Thank you for reaching out to [Your Company Name] regarding [TOPIC]. I will respond with a personal message as soon as possible. Best regards, [Your Name] [Your Company Name]

**Make it more casual:**

> Return only this response, filling in the [TOPIC]: Hi there! Thanks for your email about [TOPIC]. I'll get back to you personally soon. Cheers!

**Add urgency classification:**

> Read the email and classify urgency (Low/Medium/High). Identify the main topic. Return: Dear Correspondent!
Thank you for your message regarding [TOPIC]. Priority: [URGENCY] I will respond with a personal message as soon as possible.

### Customizing Telegram Notifications

**Incoming Email Notification**: Open the "Send Notification from Incoming Email" node and modify the "text" parameter. Available variables:
- `{{ $json.from }}` - Full sender info
- `{{ $json.from.value[0].address }}` - Sender email only
- `{{ $json.from.value[0].name }}` - Sender name (if available)
- `{{ $json.subject }}` - Email subject
- `{{ $json.date }}` - Date received
- `{{ $json.textPlain }}` - Email body (use cautiously for privacy)
- `{{ $json.to }}` - Recipient address

**Response Confirmation Notification**: Open the "Send Notification from Response" node and modify it to include additional information. Reference the AI response with `{{ $('Basic LLM Chain').item.json.text }}`.

## Monitoring and Maintenance

### Daily Monitoring
- **Check Telegram Notifications**: Review incoming email alerts and response confirmations
- **Verify AI Quality**: Spot-check AI-generated responses for appropriateness
- **Email Delivery**: Confirm auto-replies are being delivered (not caught in spam)

### Weekly Maintenance
- **Review Execution Logs**: Check n8n execution history for errors or warnings
- **Ollama Performance**: Monitor resource usage (CPU, RAM, disk space)
- **Filter Effectiveness**: Assess whether spam filters are working correctly
- **Response Quality**: Review multiple AI responses for consistency

### Monthly Maintenance
- **Update Ollama Model**: Check for new llama3.1 versions or alternative models
- **Prompt Optimization**: Refine the AI prompt based on response quality observations
- **Credential Rotation**: Update passwords and API tokens for security
- **Backup Configuration**: Export the workflow and credentials (securely)

## Advanced Usage

### Multi-Language Support

If you receive emails in multiple languages, modify the AI prompt to detect language:

> Detect the email language. Generate response in the SAME language as the email.
> If English: [English template]
> If Hungarian: [Hungarian template]
> If German: [German template]

Or use language-specific conditions in the filtering node.

### Priority-Based Responses

Generate different responses based on sender importance:
1. Add an IF node after filtering to check the sender domain
2. Route VIP emails to a different LLM chain with priority messaging
3. Standard emails use the normal AI chain

### Response Logging

To maintain a record of all AI interactions:
1. Add a database node (PostgreSQL, MySQL, etc.) after the auto-reply node
2. Store: timestamp, sender, subject, AI response, delivery status
3. Use it for compliance, analytics, or training data

### A/B Testing AI Prompts

Test different prompt variations:
1. Create multiple LLM Chain nodes with different prompts
2. Use a randomizer or round-robin approach
3. Compare response quality and user feedback
4. Optimize based on results

## Troubleshooting

### Notifications Not Received

Problem: Telegram notifications not appearing. Solutions:
- Verify the Chat ID is correct (positive for personal chats, negative for groups)
- Check that the bot has permission to send messages
- Ensure the bot wasn't blocked or removed from the group
- Test the Telegram API credential independently
- Review n8n execution logs for Telegram API errors

### AI Responses Not Generated

Problem: Auto-replies are sent but the content is empty or contains error messages. Solutions:
- Check that the Ollama service is running and responding: `ollama ps`
- Verify the llama3.1 model is downloaded: `ollama list`
- Test Ollama directly: `ollama run llama3.1 "Test message"`
- Review the Ollama API credential URL in n8n
- Check network connectivity between n8n and Ollama
- Increase the context window if emails are very long
- Monitor Ollama logs for errors

### Poor Quality AI Responses

Problem: AI generates irrelevant or inappropriate responses. Solutions:
- Review and refine the prompt engineering
- Add more specific rules and constraints
- Provide examples in the prompt of good vs. bad responses
- Adjust the topic word count (increase from 2-4 to 3-6 words)
- Test with different Ollama models (e.g., llama3.1:70b for better quality)
- Ensure email content is being passed correctly to the AI

### Auto-Replies Not Sent

Problem: Workflow executes but emails are not delivered. Solutions:
- Verify SMTP credentials and server connectivity
- Check that the fromEmail address is correct
- Review SMTP server logs for errors
- Test SMTP sending independently
- Ensure "Allow unauthorized certificates" is enabled if needed
- Check whether emails are being caught by spam filters
- Verify SPF/DKIM records for your domain

### High Resource Usage

Problem: Ollama consuming excessive CPU/RAM. Solutions:
- Reduce the context window size (from 4096 to 2048)
- Use a smaller model variant (llama3.1:8b instead of the default)
- Limit concurrent workflow executions in n8n
- Add delay/throttling between email processing
- Consider using a remote Ollama instance with better hardware
- Monitor email volume and processing time

### IMAP Connection Failures

Problem: Workflow can't connect to the email server. Solutions:
- Verify IMAP credentials are correct
- Check that IMAP is enabled on the email account
- Ensure SSL/TLS settings match server requirements
- For Gmail: use App Passwords (Google has retired "Less secure app access")
- Check that the firewall allows outbound connections on the IMAP port (993)
- Test the IMAP connection using an email client (Thunderbird, Outlook)

### Workflow Not Triggering

Problem: Workflow doesn't execute automatically. Solutions:
- Verify the workflow is in the "Active" state
- Check the trigger node configuration and schedule
- Review n8n system logs for scheduler issues
- Ensure the n8n instance has sufficient resources
- Test manual execution to isolate trigger issues
- Check whether the n8n workflow execution queue is backed up

## Workflow Architecture

### Node Descriptions

- **Check Incoming Emails - IMAP**: Polls the email server at regular intervals to retrieve new messages from the configured mailbox.
- **Send Notification from Incoming Email**: Immediately sends a formatted notification to Telegram for every new email detected, regardless of spam status.
- **Dedicate Filtering As No-Response**: Evaluates emails against spam filter criteria to determine whether AI processing should occur.
- **No Operation**: Placeholder node for filtered emails that should not receive an auto-reply (spam, newsletters, automated messages).
- **Ollama Model**: Provides the AI language model (llama3.1) used for natural language processing and response generation.
- **Basic LLM Chain**: Executes the AI prompt against the email content to generate contextual auto-reply text.
- **Send Auto-Response in SMTP**: Sends the AI-generated acknowledgment email back to the original sender via the SMTP server.
- **Send Notification from Response**: Sends confirmation to Telegram showing that the auto-reply was successfully sent, including the AI-generated content.

### AI Processing Pipeline

1. **Email Content Extraction**: Email body text is extracted from the IMAP data
2. **Context Loading**: Email content is passed to the LLM with prompt instructions
3. **Topic Analysis**: AI identifies the main subject or purpose in 2-4 words
4. **Template Population**: AI fills the response template with the identified topic
5. **Output Formatting**: Response is formatted and cleaned for email delivery
6. **Quality Assurance**: n8n validates the response before sending
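The filtering and templating steps described above can be sketched in plain JavaScript, as an n8n Code node might express them. This is an illustrative sketch only; the function names are hypothetical and are not the workflow's actual internals.

```javascript
// Illustrative sketch of the spam-filter criteria and the fixed response
// template described above. Function names are hypothetical.

// Mirrors the "Dedicate Filtering As No-Response" criteria:
// skip noreply/no-reply senders and newsletter subjects.
function shouldAutoReply(email) {
  const from = email.from.toLowerCase();
  const subject = email.subject.toLowerCase();
  if (from.includes("noreply") || from.includes("no-reply")) return false;
  if (subject.includes("newsletter")) return false;
  return true;
}

// Mirrors the response template the LLM is asked to fill in with the
// 2-4 word topic it identified.
function buildReply(topic) {
  return (
    "Dear Correspondent! Thank you for your message regarding " +
    topic +
    ". I will respond with a personal message as soon as possible. Have a nice day!"
  );
}
```

In the actual workflow the filter runs as IF-node conditions and the template lives inside the LLM prompt, but the decision logic is equivalent.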
by Yang
## Who's it for

This workflow is perfect for marketers, social media managers, recruiters, sales teams, and researchers who need to collect and organize public profile data from TikTok and LinkedIn. Whether you're building influencer databases, enriching CRM data, conducting competitor research, or gathering prospect information, this workflow automates the entire data extraction and storage process.

## What it does

This AI-powered Telegram bot automatically scrapes public profile data from TikTok and LinkedIn, then saves it directly to Google Sheets. Simply send a TikTok username or LinkedIn profile URL via text or voice message, and the workflow handles everything.

For TikTok profiles:
- Username and verification status
- Follower, following, and friend counts
- Total hearts (likes) and video count
- Bio link and secure user ID

For LinkedIn profiles:
- Full name and profile picture
- Location and follower count
- Bio/about section
- Recent posts activity link

All data is automatically organized into separate Google Sheets tabs for easy reference and analysis. You receive an email notification when extraction is complete.

## How it works

The workflow uses an AI Agent as an intelligent router that determines which platform to scrape based on your input. Here's the flow:

1. **Input Processing**: Send a message via Telegram (text or voice)
2. **Voice Transcription**: If you send a voice note, OpenAI Whisper transcribes it to text
3. **AI Routing**: The agent identifies whether it's TikTok or LinkedIn
4. **Profile Scraping**: Calls Dumpling AI's specialized scraper for that platform
5. **Data Extraction**: Parses the profile metrics and details
6. **Database Storage**: Saves all data to the appropriate Google Sheets tab
7. **Confirmation**: Sends an email notification when complete

The AI agent ensures proper tool pairing - it always scrapes first, then saves, preventing partial data or errors.
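The routing decision above is made by the AI Agent, but the distinction it draws can be sketched as simple input classification. This is a hypothetical stand-in, not the agent's actual logic.

```javascript
// Hypothetical sketch of the platform-routing decision the AI Agent makes.
// The real workflow uses an LLM, which also handles natural-language
// phrasing; this sketch only captures the core distinction.
function detectPlatform(input) {
  const text = input.trim();
  if (text.includes("linkedin.com/in/")) return "linkedin";
  // A bare @handle or a tiktok.com URL is treated as a TikTok profile.
  if (text.startsWith("@") || text.includes("tiktok.com/")) return "tiktok";
  return "unknown";
}
```

An "unknown" result is where the agent would ask for clarification rather than calling a scraper tool.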
## Setup Requirements

Accounts & Credentials Needed:
- Telegram Bot Token (create via @BotFather)
- OpenAI API Key (for voice transcription and AI routing)
- Dumpling AI API Key (for profile scraping)
- Google Sheets OAuth2 credentials
- Gmail OAuth2 credentials (for notifications)

Google Sheets Structure: create a spreadsheet with two tabs.

- **TikTok Tab** - Columns: Username, verified, secUid, bioLink, followerCount, followingCount, heartCount, videoCount, friendCount
- **LinkedIn Tab** - Columns: name, image, location, followers, about, recentPosts, link

## How to set up

**Step 1: Create Telegram Bot**
1. Open Telegram and message @BotFather
2. Use the /newbot command and follow the prompts
3. Save your bot token for later

**Step 2: Configure Credentials**
1. Add the Telegram bot token to the "Receive Telegram Message" node
2. Add the OpenAI API key to the "OpenAI Chat Model" and "Transcribe Audio" nodes
3. Add Dumpling AI credentials as HTTP Header Auth
4. Connect Google Sheets OAuth2
5. Connect Gmail OAuth2

**Step 3: Set Up Google Sheets**
1. Create a new Google Spreadsheet
2. Create two tabs: "TikTok" and "LinkedIn"
3. Add the column headers as specified above
4. Copy the spreadsheet ID from the URL

**Step 4: Update Workflow**
1. Replace the Google Sheets document ID in both database saver nodes
2. Update the email address in the "Send Completion Email" node
3. Remove personal credential names ("Nneka")

**Step 5: Test the Workflow**
1. Activate the workflow
2. Message your bot with: "Scrape TikTok profile: @charlidamelio"
3. Or try: "Extract this LinkedIn: https://www.linkedin.com/in/example"
4. Check your Google Sheets for the data

## How to customize

**Add More Social Platforms**: Create new scraper/saver tool pairs for Instagram, Twitter/X, or YouTube by:
- Adding new HTTP Request Tool nodes for scraping
- Adding corresponding Google Sheets Tool nodes
- Updating the AI Agent's system prompt with new protocols

**Enhance Voice Input**:
- Add language detection for multilingual voice notes
- Implement speaker identification for team usage
- Add voice response capability

**Advanced Data Enrichment**:
- Chain multiple profile lookups for followers
- Add sentiment analysis on bios and recent posts
- Implement automatic categorization/tagging

**Notification Improvements**:
- Send results directly to Telegram instead of email
- Add Slack notifications for team collaboration
- Create detailed extraction reports with statistics

**Batch Processing**:
- Modify to accept CSV files with multiple profiles
- Add rate limiting to avoid API throttling
- Implement a queue system for large-scale scraping
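For the batch-processing customization above, the rate-limiting idea can be sketched as a helper that spaces out scrape calls. This is a hypothetical example, not part of the template; the delay value would depend on your API's rate limits.

```javascript
// Hypothetical rate-limiting helper for batch scraping: process profiles
// one at a time with a fixed pause between requests to avoid throttling.
// Not part of the template; scrapeFn stands in for the scraper call.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function scrapeInBatches(profiles, scrapeFn, delayMs) {
  const results = [];
  for (const profile of profiles) {
    results.push(await scrapeFn(profile)); // one request at a time
    await sleep(delayMs);                  // pause before the next request
  }
  return results;
}
```

A real queue system would add retries and persistence, but for a CSV of a few hundred profiles this sequential pattern is usually enough.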
by Jitesh Dugar
# Automated Customer Statement Generator with Risk Analysis & Credit Monitoring

Transform account statement management from hours to minutes - automatically compile transaction histories, calculate aging analysis, monitor credit limits, assess payment risk, and deliver professional PDF statements while syncing with accounting systems and alerting your team about high-risk accounts.

## What This Workflow Does

Revolutionizes customer account management with intelligent statement generation, credit monitoring, and risk assessment:

- **Webhook-Triggered Generation** - Automatically creates statements from accounting systems, CRM updates, or scheduled monthly triggers
- **Smart Data Validation** - Verifies transaction data, validates account information, and ensures statement accuracy before generation
- **Running Balance Calculation** - Automatically computes running balances through all transactions with opening and closing balance tracking
- **Comprehensive Aging Analysis** - Calculates outstanding balances by age buckets (Current, 31-60 days, 61-90 days, 90+ days)
- **Overdue Detection & Highlighting** - Automatically identifies overdue amounts with visual color-coded alerts on statements
- **Professional HTML Design** - Creates beautifully branded statements with modern layouts, aging breakdowns, and payment information
- **PDF Conversion** - Transforms HTML into print-ready, professional-quality PDF statements with preserved formatting
- **Automated Email Delivery** - Sends branded emails to customers with PDF attachments and account summary details
- **Google Drive Archival** - Automatically saves statements to organized folders with searchable filenames by account
- **Credit Limit Monitoring** - Tracks credit utilization, detects over-limit accounts, and generates alerts at 75%, 90%, and 100%+ thresholds
- **Risk Scoring System** - Calculates 0-100 risk scores based on payment behavior, aging, credit utilization, and overdue patterns
- **Payment Behavior Analysis** - Tracks days since last payment, average payment time, and payment reliability trends
- **Automated Recommendations** - Generates prioritized action items like "escalate to collections" or "suspend new credit"
- **Accounting System Integration** - Syncs statement delivery, balance updates, and risk assessments to QuickBooks, Xero, or FreshBooks
- **Conditional Team Notifications** - Different Slack alerts for overdue accounts (urgent) vs current accounts (standard) with risk metrics
- **Transaction History Table** - Detailed itemization of all charges, payments, and running balances throughout the statement period
- **Multiple Payment Options** - Includes bank details, online payment links, and account manager contact information

## Key Features

- **Automatic Statement Numbering**: Generates unique sequential statement numbers with the format STMT-YYYYMM-AccountNumber for easy tracking and reference
- **Aging Bucket Analysis**: Breaks down outstanding balances into current (0-30 days), 31-60 days, 61-90 days, and 90+ days overdue categories
- **Credit Health Dashboard**: Visual indicators show credit utilization percentage, available credit, and over-limit warnings in the statement
- **Risk Assessment Engine**: Analyzes multiple factors including overdue amounts, credit utilization, and payment frequency to calculate a comprehensive risk score
- **Payment Behavior Tracking**: Monitors days since last payment and identifies patterns like "Excellent - Pays on Time" or "Poor - Chronic Late Payment"
- **Intelligent Recommendations**: Automatically generates prioritized action items based on account status, risk level, and payment history
- **Transaction Running Balance**: Shows the balance after each transaction so customers can verify accuracy and reconcile their records
- **Over-Limit Detection**: Immediate alerts when accounts exceed credit limits, with escalation recommendations to suspend new charges
- **Good Standing Indicators**: Visual green checkmarks and positive messaging for accounts with no overdue balances
- **Account Manager Details**: Includes a dedicated contact person for questions,
disputes, and payment arrangements
- **Dispute Process Documentation**: Clear instructions on how customers can dispute transactions within the required timeframe
- **Multi-Currency Support**: Handles USD, EUR, GBP, INR with proper currency symbols and formatting throughout the statement
- **Accounting System Sync**: Logs statement delivery, balance updates, and risk assessments in QuickBooks, Xero, FreshBooks, or Wave
- **Conditional Workflow Routing**: Different automation paths for high-risk overdue accounts vs healthy current accounts
- **Activity Notes Generation**: Creates detailed CRM notes with account summary, recommendations, and delivery confirmation
- **Print-Optimized PDFs**: A4 format with proper margins and color preservation for professional printing and digital distribution

## Perfect For

- **B2B Companies with Trade Credit** - Manufacturing, wholesale, distribution businesses offering net-30 or net-60 payment terms
- **Professional Services Firms** - Consulting, legal, accounting firms with monthly retainer clients and time-based billing
- **Subscription Services (B2B)** - SaaS platforms, software companies, membership organizations with recurring monthly charges
- **Equipment Rental Companies** - Construction equipment, party rentals, medical equipment with ongoing rental agreements
- **Import/Export Businesses** - International traders managing accounts receivable across multiple customers and currencies
- **Healthcare Billing Departments** - Medical practices, clinics, hospitals tracking patient account balances and payment plans
- **Educational Institutions** - Private schools, universities, training centers with tuition payment plans and installments
- **Telecommunications Providers** - Phone, internet, cable companies sending monthly account statements to business customers
- **Utilities & Energy Companies** - Electric, gas, water utilities managing commercial account statements and collections
- **Property Management Companies** - Real estate firms tracking tenant charges, rent payments, and maintenance fees
- **Credit Card Companies & Lenders** - Financial institutions providing detailed account activity and payment due notifications
- **Wholesale Suppliers** - Distributors supplying restaurants, retailers, contractors on credit terms with monthly settlements
- **Commercial Insurance Agencies** - Agencies tracking premium payments, policy charges, and outstanding balances
- **Construction Contractors** - General contractors billing for progress payments, change orders, and retention releases

## What You Will Need

### Required Integrations

- **HTML to PDF API** - PDF conversion service (API key required) - supports HTML/CSS to PDF API, PDFShift, or similar providers (approximately 1-5 cents per statement)
- **Gmail or SMTP** - Email delivery service for sending statements to customers (OAuth2 or SMTP credentials)
- **Google Drive** - Cloud storage for statement archival and compliance record-keeping (OAuth2 credentials required)

### Optional Integrations

- **Slack Webhook** - Team notifications for overdue and high-risk accounts (free incoming webhook)
- **Accounting Software Integration** - QuickBooks, Xero, FreshBooks, Zoho Books API for automatic statement logging and balance sync
- **CRM Integration** - HubSpot, Salesforce, Pipedrive for customer activity tracking and collections workflow triggers
- **Payment Gateway** - Stripe, PayPal, Square payment links for one-click online payment from statements
- **Collections Software** - Integrate with collections management platforms for automatic escalation of high-risk accounts
- **SMS Notifications** - Twilio integration for payment due reminders and overdue alerts via text message

## Quick Start

1. **Import Template** - Copy the JSON workflow and import it into your n8n instance
2. **Configure PDF Service** - Add HTML to PDF API credentials in the "HTML to PDF" node
3. **Setup Gmail** - Connect Gmail OAuth2 credentials in the "Send Email to Customer" node and update the sender email
4. **Connect Google Drive** - Add Google Drive OAuth2 credentials and set the folder ID for statement archival
5. **Customize Company Info** - Edit the "Enrich with
Company Data" node to add company name, address, contact details, and bank information
6. **Configure Credit Limits** - Set default credit limits and payment terms for your customer base
7. **Adjust Risk Thresholds** - Modify the risk scoring logic in the "Credit Limit & Risk Analysis" node based on your policies
8. **Update Email Template** - Customize the email message in the Gmail node with your branding and messaging
9. **Configure Slack** - Add Slack webhook URLs in both notification nodes (overdue and current accounts)
10. **Connect Accounting System** - Replace the code in the "Update Accounting System" node with an actual API call to QuickBooks/Xero/FreshBooks
11. **Test Workflow** - Submit sample transaction data via webhook to verify PDF generation, email delivery, and notifications
12. **Schedule Monthly Run** - Set up a scheduled trigger for automatic end-of-month statement generation for all customers

## Customization Options

- **Custom Aging Buckets** - Modify aging periods to match your business (e.g., 0-15, 16-30, 31-45, 46-60, 60+ days)
- **Industry-Specific Templates** - Create different statement designs for different customer segments or business units
- **Multi-Language Support** - Translate statement templates for international customers (Spanish, French, German, Mandarin)
- **Dynamic Credit Terms** - Configure different payment terms by customer type (VIP net-45, standard net-30, new customers due on receipt)
- **Late Fee Calculation** - Add automatic late fee calculation and inclusion for overdue balances
- **Payment Plan Tracking** - Track installment payment plans with remaining balance and next payment due
- **Interest Charges** - Calculate and add interest charges on overdue balances based on configurable rates
- **Partial Payment Allocation** - Show how partial payments were applied across multiple invoices
- **Customer Portal Integration** - Generate secure links for customers to view statements and make payments online
- **Batch Processing** - Process statements for hundreds of customers simultaneously with bulk email delivery
- **White-Label Branding** - Create different branded templates for multiple companies or subsidiaries
- **Custom Risk Models** - Adjust risk scoring weights based on your industry and historical payment patterns
- **Collections Workflow Integration** - Automatically create tasks in collections software for high-risk accounts
- **Early Payment Incentives** - Highlight early payment discounts or prompt payment benefits on statements
- **Dispute Management** - Track disputed transactions and adjust balances accordingly with an audit trail

## Expected Results

- **90% time savings** - Reduce statement creation from 2-3 hours to 5 minutes per customer
- **100% accuracy** - Eliminate calculation errors and missing transactions through automated processing
- **50% faster payment collection** - Professional statements with clear aging drive faster customer payments
- **Zero filing time** - Automatic Google Drive organization with searchable filenames by account
- **30% reduction in overdue accounts** - Proactive credit monitoring and risk alerts prevent bad debt
- **Real-time risk visibility** - Instant identification of high-risk accounts before they become uncollectible
- **Automated compliance** - Complete audit trail with timestamped statement delivery and accounting sync
- **Better customer communication** - Professional statements improve customer satisfaction and reduce disputes
- **Reduced bad debt write-offs** - Early warning system catches payment issues before they escalate
- **Improved cash flow** - Faster statement delivery and payment reminders accelerate cash collection

## Pro Tips

- **Schedule Monthly Batch Generation** - Run the workflow automatically on the last day of the month to generate statements for all customers simultaneously
- **Customize Aging Thresholds** - Adjust credit alert levels (75%, 90%, 100%) based on your risk tolerance and industry norms
- **Segment Customer Communications** - Use different email templates for VIP customers vs standard customers vs delinquent accounts
- **Track Payment Patterns** - Monitor days-to-pay metrics by customer to identify chronic
late payers proactively
- **Integrate with Collections** - Connect the workflow to collections software to automatically escalate 90+ day accounts
- **Include Payment Portal Links** - Add unique payment links to each statement for one-click online payment
- **Automate Follow-Up Reminders** - Build a workflow extension to send payment reminders 7 days before the due date
- **Create Executive Dashboards** - Export risk scores and aging data to business intelligence tools for trend analysis
- **Document Dispute Resolutions** - Log all disputed transactions in the accounting system with resolution notes
- **Test with Sample Data First** - Validate aging calculations with known test data before processing real customer accounts
- **Archive Statements for Compliance** - Maintain a 7-year archive in Google Drive organized by year and customer
- **Monitor Credit Utilization Trends** - Track credit utilization changes month-over-month to predict cash flow needs
- **Benchmark Against Industry** - Compare your DSO and bad debt ratios to industry averages to identify improvement areas
- **Personalize Account Manager Info** - Assign dedicated contacts to customers and include their direct phone and email
- **Use Descriptive Transaction Details** - Ensure transaction descriptions clearly explain charges to reduce disputes

## Business Impact Metrics

Track these key metrics to measure workflow success:

- **Statement Generation Time** - Measure average minutes from trigger to delivered statement (target: under 5 minutes)
- **Statement Volume Capacity** - Count monthly statements generated through automation (expect a 10-20x increase in capacity)
- **Aging Calculation Accuracy** - Track statements with aging errors (target: 0% error rate)
- **Days Sales Outstanding (DSO)** - Monitor average days to collect payment (expect 15-30% reduction)
- **Bad Debt Write-Offs** - Track uncollectible accounts as a percentage of revenue (expect 30-50% reduction)
- **Collection Rate** - Monitor the percentage of invoices collected within terms (expect 10-20% improvement)
- **Customer Disputes** - Count statement disputes and billing inquiries (expect 50-70% reduction)
- **Over-Limit Accounts** - Track the number of accounts exceeding credit limits (early detection prevents losses)
- **High-Risk Account Identification** - Measure days between risk detection and collection action (target: within 48 hours)
- **Cash Flow Improvement** - Calculate working capital improvement from faster collections (typical: 20-35% improvement)

## Template Compatibility

- Compatible with n8n version 1.0 and above
- Works with n8n Cloud and Self-Hosted instances
- Requires an HTML to PDF API service subscription (1-5 cents per statement)
- No coding required for basic setup
- Fully customizable for industry-specific requirements
- Integrates with major accounting platforms via API
- Multi-currency and multi-language ready
- Supports batch processing for large customer bases
- Compliant with financial record-keeping regulations

Ready to transform your accounts receivable management? Import this template and start generating professional statements with credit monitoring, risk assessment, and automated collections alerts - improving your cash flow, reducing bad debt, and freeing your accounting team to focus on strategic financial management!
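The aging-bucket classification and the 0-100 risk score described in this template might be sketched like this in an n8n Code node. The bucket boundaries follow the template (current, 31-60, 61-90, 90+ days), but the score weights are illustrative assumptions, not the template's actual values.

```javascript
// Illustrative sketch of aging buckets and a 0-100 risk score.
// Bucket boundaries match the template's aging analysis; the score
// weights are invented for illustration only.

// Classify open invoice amounts into age buckets relative to `today`
// (both dates as millisecond timestamps).
function ageInvoices(invoices, today) {
  const buckets = { current: 0, d31_60: 0, d61_90: 0, d90plus: 0 };
  for (const inv of invoices) {
    const days = Math.floor((today - inv.dueDate) / 86400000);
    if (days <= 30) buckets.current += inv.amount;
    else if (days <= 60) buckets.d31_60 += inv.amount;
    else if (days <= 90) buckets.d61_90 += inv.amount;
    else buckets.d90plus += inv.amount;
  }
  return buckets;
}

// Combine overdue share and credit utilization into a 0-100 score.
// Hypothetical weights: overdue share up to 60 pts, utilization up to 40 pts
// (utilization is capped at 150% so over-limit accounts max out the term).
function riskScore(buckets, balance, creditLimit) {
  if (balance <= 0) return 0;
  const overdue = buckets.d31_60 + buckets.d61_90 + buckets.d90plus;
  const utilization = Math.min(balance / creditLimit, 1.5);
  const score = (overdue / balance) * 60 + (utilization / 1.5) * 40;
  return Math.min(100, Math.round(score));
}
```

A production risk model would also weigh days since last payment and payment-frequency trends, as the template's "Credit Limit & Risk Analysis" node does.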
by Kyriakos Papadopoulos
# Auto-Summarize Blog Posts to Social Media with Gemma and Postiz

This workflow automates fetching the latest post from a Blogspot RSS feed, summarizes it with an LLM (e.g., Gemma via Ollama), extracts and uploads an image, generates three relevant hashtags, and posts to Facebook, LinkedIn, X (Twitter), and Instagram via the Postiz API. It ensures content fits platform limits (e.g., 280 characters for X) and prevents duplicates using hashing.

**Pros:**
- Efficient for content creators
- Local LLM ensures privacy
- Customizable for any RSS/blog source

**Cons:**
- Dependent on stable APIs (Postiz/social platforms)
- LLM outputs may vary in quality without human review

**Target Audience:** Bloggers, content marketers, or social media managers looking to automate cross-platform posting from RSS sources, especially those focused on niches like health, tech, or personal development. Ideal for users with technical setup skills for self-hosting.

**Customization Options:**
- Adapt prompts in "Generate Summary and Hashtags with LLM" for tone/style (e.g., professional vs. casual).
- Modify maxChars/hashtag reserve in "Calculate Summary Character Limit" for different platforms.
- Extend for multiple RSS feeds by adjusting the "Calculate Summary Character Limit" array.
- Add error handling (e.g., an IF node after "Create and Post Content via Postiz API") for API failures.

**Disclaimer:** This template is designed for self-hosted n8n instances to leverage local Ollama for privacy. For cloud use, modify as follows: 1) Use an n8n cloud account, 2) Replace Ollama with a cloud API-based LLM like ChatGPT in the "Configure Local LLM Model (Ollama)" node, 3) Switch to cloud-hosted Postiz in the HTTP Request node.

## How it works

1. Set the RSS feed URL in "Set RSS Feed URLs".
2. Fetch the latest post via RSS.
3. Normalize fields and calculate the maximum summary length.
4. Use the LLM to summarize the text, append hashtags, and include the link.
5. Extract and process an image from the post HTML.
6. Validate inputs and post to social platforms via the Postiz API.

## Setup Instructions

1. Install n8n (self-hosted recommended for Ollama integration).
2. Set up Ollama with the Gemma (or a similar) model using "Ollama Model" credentials.
3. Add Postiz API credentials in the "Create and Post Content via Postiz API" node.
4. Replace the placeholders: the RSS URL in "Set RSS Feed URLs" and the integration IDs in the Postiz HTTP body.
5. (Optional) Add error handling for API failures.
6. Activate the workflow and test with a sample post.

## Uncertainties

- Changes in social media APIs may break posting functionality.
- LLM output consistency depends on model choice and prompt configuration.

## Required n8n Version

Tested on n8n v1.107.3 (self-hosted). Works with the community node n8n-nodes-langchain.

## Resources

- n8n Docs: RSS Feed Read
- n8n Docs: HTTP Request
- Ollama Setup
- Postiz Documentation