by vinci-king-01
# Influencer Content Monitor with ScrapeGraphAI Analysis and ROI Tracking

## Target Audience
- Marketing managers and brand managers
- Influencer marketing agencies
- Social media managers
- Digital marketing teams
- Brand partnerships coordinators
- Marketing analysts and strategists
- Campaign managers
- ROI and performance analysts

## Problem Statement
Manual monitoring of influencer campaigns is time-consuming and often misses critical performance insights, brand mentions, and ROI calculations. This template solves the challenge of automatically tracking influencer content, analyzing engagement metrics, detecting brand mentions, and calculating campaign ROI using AI-powered analysis and automated workflows.

## How it Works
This workflow automatically monitors influencer profiles and content using ScrapeGraphAI for intelligent analysis, tracks brand mentions and sponsored content, calculates performance metrics, and provides comprehensive ROI analysis for marketing campaigns.

### Key Components
1. **Daily Schedule Trigger** - Runs automatically every day at 9:00 AM to monitor influencer campaigns
2. **ScrapeGraphAI - Influencer Profiles** - Uses AI to extract profile data and recent posts from Instagram
3. **Content Analyzer** - Analyzes post content for engagement rates and quality scoring
4. **Brand Mention Detector** - Identifies brand mentions and sponsored content indicators
5. **Campaign Performance Tracker** - Tracks campaign metrics and KPIs
6. **Marketing ROI Calculator** - Calculates return on investment for campaigns

## Data Analysis Specifications
The template analyzes and tracks the following metrics:

| Metric Category | Data Points | Description | Example |
|----------------|-------------|-------------|---------|
| Profile Data | Username, Followers, Following, Posts Count, Bio, Verification Status | Basic influencer profile information | "@influencer", "100K followers", "Verified" |
| Post Analysis | Post URL, Caption, Likes, Comments, Date, Hashtags, Mentions | Individual post performance data | "5,000 likes", "150 comments" |
| Engagement Metrics | Engagement Rate, Content Quality Score, Performance Tier | Calculated performance indicators | "3.2% engagement rate", "High performance" |
| Brand Detection | Brand Mentions, Sponsored Content, Mention Count | Brand collaboration tracking | "Nike mentioned", "Sponsored post detected" |
| Campaign Performance | Total Reach, Total Engagement, Average Engagement, Performance Score | Overall campaign effectiveness | "50K total reach", "85.5 performance score" |
| ROI Analysis | Total Investment, Estimated Value, ROI Percentage, Cost per Engagement | Financial performance metrics | "$2,500 investment", "125% ROI" |

## Setup Instructions
Estimated setup time: 20-25 minutes

### Prerequisites
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Instagram accounts to monitor (influencer usernames)
- Campaign budget and cost data for ROI calculations

### Step-by-Step Configuration

**1. Install Community Nodes**
- Install the required community node: `npm install n8n-nodes-scrapegraphai`

**2. Configure ScrapeGraphAI Credentials**
- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

**3. Set up Schedule Trigger**
- Configure the daily schedule (default: 9:00 AM UTC)
- Adjust the timezone to match your business hours
- Set an appropriate frequency for your monitoring needs

**4. Configure Influencer Monitoring**
- Update the `websiteUrl` parameter with target influencer usernames
- Customize the user prompt to extract specific profile data
- Set up monitoring for multiple influencers if needed
- Configure brand keywords for mention detection

**5. Customize Brand Detection**
- Update brand keywords in the Brand Mention Detector node
- Add sponsored content indicators (#ad, #sponsored, etc.)
- Configure brand mention sensitivity levels
- Set up competitor brand monitoring

**6. Configure ROI Calculations**
- Update cost estimates in the Marketing ROI Calculator
- Set value per engagement and reach metrics
- Configure campaign management costs
- Adjust ROI calculation parameters

**7. Test and Validate**
- Run the workflow manually with test data
- Verify all analysis steps complete successfully
- Check data accuracy and calculation precision
- Validate ROI calculations with actual campaign data

## Workflow Customization Options

### Modify Monitoring Parameters
- Adjust monitoring frequency (hourly, daily, weekly)
- Add more social media platforms (TikTok, YouTube, etc.)
- Customize engagement rate calculations
- Modify content quality scoring algorithms

### Extend Brand Detection
- Add more sophisticated brand mention detection
- Implement sentiment analysis for brand mentions
- Include competitor brand monitoring
- Add automated alert systems for brand mentions

### Customize Performance Tracking
- Modify performance tier thresholds
- Add more detailed engagement metrics
- Implement trend analysis and forecasting
- Include audience demographic analysis

### Output Customization
- Add integration with marketing dashboards
- Implement automated reporting systems
- Create alert systems for performance drops
- Add campaign comparison features

## Use Cases
- **Influencer Campaign Monitoring:** Track performance of influencer partnerships
- **Brand Mention Detection:** Monitor brand mentions across influencer content
- **ROI Analysis:** Calculate return on investment for marketing campaigns
- **Competitive Intelligence:** Monitor competitor brand mentions
- **Performance Optimization:** Identify top-performing content and influencers
- **Campaign Reporting:** Generate automated reports for stakeholders

## Important Notes
- Respect Instagram's terms of service and rate limits
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update brand keywords and detection parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Consider data privacy and compliance requirements
- Ensure accurate cost data for ROI calculations

## Troubleshooting
Common issues:
- **ScrapeGraphAI connection errors:** Verify API key and account status
- **Instagram access issues:** Check account accessibility and rate limits
- **Brand detection false positives:** Adjust keyword sensitivity
- **ROI calculation errors:** Verify cost and value parameters
- **Schedule trigger failures:** Check timezone and cron expression
- **Data parsing errors:** Review the Code node's JavaScript logic

Support resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Instagram API documentation and best practices
- Influencer marketing analytics best practices
- ROI calculation methodologies and standards
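To make the ROI metrics concrete, here is a hedged sketch of the arithmetic the Marketing ROI Calculator step is described as performing. The function name, field names (`likes`, `comments`, `reach`), and the flat value-per-engagement model are illustrative assumptions, not taken from the actual node code.

```javascript
// Hypothetical sketch of the ROI math described above; the real
// Marketing ROI Calculator node may use different fields and weights.
function calculateRoi(posts, totalInvestment, valuePerEngagement) {
  // Sum engagement (likes + comments) and reach across all tracked posts
  const totalEngagement = posts.reduce((sum, p) => sum + p.likes + p.comments, 0);
  const totalReach = posts.reduce((sum, p) => sum + p.reach, 0);

  // Estimated campaign value under a flat value-per-engagement assumption
  const estimatedValue = totalEngagement * valuePerEngagement;

  // ROI % = (value - investment) / investment * 100
  const roiPercent = ((estimatedValue - totalInvestment) / totalInvestment) * 100;
  const costPerEngagement = totalEngagement > 0 ? totalInvestment / totalEngagement : 0;

  return { totalReach, totalEngagement, estimatedValue, roiPercent, costPerEngagement };
}
```

For example, one post with 5,000 likes and 150 comments against a $2,500 investment at $1 per engagement yields a positive ROI; adjust `valuePerEngagement` to match your own campaign cost data.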
by Jitesh Dugar
Streamline your automotive service center's operations with this comprehensive automation. This workflow manages the entire customer lifecycle, from automated service reminders and instant appointment booking via WhatsApp to mileage tracking and full service history logs, all synchronized in real-time with Google Sheets and WATI.

## What This Workflow Does
This template transforms WhatsApp into a self-service hub for vehicle owners and a management tool for garage staff:

### Automated Reminders
- **Service Due:** Every morning at 9 AM, the bot scans your database and identifies vehicles due for service based on either their next service date or mileage threshold, sending a personalized WhatsApp alert to the owner.
- **Appointment Prep:** At 8 AM daily, the system reminds owners of their confirmed appointments for the following day, reducing no-shows.

### Instant Booking & Confirmation
When an owner replies with `book`, the bot dynamically generates available slots for the next 3 weekdays. Owners pick a slot (e.g., `confirm 1`), and the appointment is instantly logged in your Appointments sheet with a confirmation message sent back.

### Mileage & Status Management
Owners can update their odometer reading anytime by sending `mileage <km>`. The bot automatically recalculates their next service point and warns them if they are approaching a critical maintenance interval.

### History & Staff Tools
- **Customer View:** Owners can request their vehicle's full status card or a detailed history of their last 5 service records.
- **Staff Logging:** Garage technicians can log completed work using a simple command (e.g., `logservice MH12... 52000 Oil Change 2500`), which automatically updates the service history and resets the reminder cycle.

## Key Features
- **Intelligent Odometer Tracking:** Predicts service needs by comparing current mileage against individual service intervals.
- **Dynamic Slot Generation:** Automatically avoids weekends and generates morning/afternoon options to simplify the booking experience.
- **Command-Based Routing:** Uses an intuitive keyword system (`book`, `status`, `history`, `mileage`) to handle multiple customer requests simultaneously.
- **Duplicate Prevention:** Tracks "reminders sent" to ensure customers aren't pestered with multiple alerts for the same service period.

## Perfect For
- **Independent Garages:** Providing a "dealership-level" digital experience without expensive software.
- **Fleet Managers:** Tracking maintenance schedules for corporate vehicles.
- **Car Dealerships:** Automating follow-ups for post-purchase service packages.
- **Motorcycle Repair Shops:** Managing quick oil changes and seasonal check-ups.

## What You'll Need

### Required Integrations
- **WATI:** For WhatsApp messaging and handling incoming customer commands.
- **Google Sheets:** To act as your 3-part database (Vehicles, Appointments, and ServiceHistory).

## Configuration Steps
1. **Google Sheet Setup:** Create a sheet with three tabs: Vehicles, Appointments, ServiceHistory.
2. **Document ID:** Replace `YOUR_GOOGLE_SHEET_ID` in every Google Sheets node with your specific sheet's ID.
3. **Credentials:** Connect your Google Sheets OAuth2 and WATI account credentials in n8n.

Ready to automate your service bay? Import this template, connect your Google Sheet, and start sending intelligent reminders to your customers today!
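The command-based routing described above can be sketched as a small dispatcher. This is a hypothetical illustration of the keyword grammar from the template description (`book`, `status`, `history`, `mileage <km>`, `logservice <plate> <odometer> <work...> <cost>`); the real workflow implements this with n8n Switch/IF nodes and WATI's payload format, which may differ.

```javascript
// Hypothetical keyword router for incoming WhatsApp messages.
// Command grammar is taken from the template description; action names
// and field names are illustrative assumptions.
function routeCommand(text) {
  const parts = text.trim().split(/\s+/);
  const keyword = parts[0].toLowerCase();
  switch (keyword) {
    case 'book':
      return { action: 'offerSlots' };
    case 'status':
      return { action: 'sendStatusCard' };
    case 'history':
      return { action: 'sendLastFiveRecords' };
    case 'mileage':
      // mileage <km>
      return { action: 'updateMileage', km: parseInt(parts[1], 10) };
    case 'logservice':
      // logservice <plate> <odometer> <work...> <cost>
      return {
        action: 'logService',
        plate: parts[1],
        odometer: parseInt(parts[2], 10),
        work: parts.slice(3, -1).join(' '),
        cost: parseFloat(parts[parts.length - 1]),
      };
    default:
      return { action: 'help' };
  }
}
```

A single dispatcher like this keeps customer-facing commands and staff commands in one place, which mirrors how a Switch node routes each incoming message to the right branch.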
by Dariusz Koryto
Get automated weather updates delivered directly to your Telegram chat at scheduled intervals. This workflow fetches current weather data from OpenWeatherMap and sends formatted weather reports via a Telegram bot.

## Use Cases
- Daily morning weather briefings
- Regular weather monitoring for outdoor activities
- Automated weather alerts for specific locations
- Personal weather assistant for travel planning

## Prerequisites
Before setting up this workflow, ensure you have:
- An OpenWeatherMap API account (free tier available)
- A Telegram bot token
- Your Telegram chat ID
- An n8n instance (cloud or self-hosted)

## Setup Instructions

### Step 1: Create OpenWeatherMap Account
1. Go to OpenWeatherMap and sign up for a free account
2. Navigate to the API keys section in your account
3. Copy your API key (you'll need this for the workflow configuration)

### Step 2: Create Telegram Bot
1. Open Telegram and search for @BotFather
2. Start a chat and use the /newbot command
3. Follow the prompts to create your bot and get the bot token
4. Save the bot token securely

### Step 3: Get Your Telegram Chat ID
1. Start a conversation with your newly created bot
2. Send any message to the bot
3. Visit https://api.telegram.org/bot<YourBOTToken>/getUpdates in your browser
4. Look for your chat ID in the response (it will be a number like 123456789)

### Step 4: Configure the Workflow
Import this workflow into your n8n instance and configure each node with your credentials:

**Schedule Trigger node**
- Set your preferred schedule (default: daily at 8:00 AM)
- Use cron expression format (e.g., `0 8 * * *` for 8 AM daily)

**Get Weather node**
- Add your OpenWeatherMap credentials
- Update the `cityName` parameter to your desired location
- Format: "CityName,CountryCode" (e.g., "London,UK")

**Send a text message node**
- Add your Telegram bot credentials (bot token)
- Replace `XXXXXXX` in the `chatId` field with your actual chat ID

## Customization Options

### Location Settings
In the "Get Weather" node, modify the `cityName` parameter to change the location. You can specify:
- City name only: "Paris"
- City with country: "Paris,FR"
- City with state and country: "Miami,FL,US"

### Schedule Frequency
In the "Schedule Trigger" node, adjust the cron expression:
- Every 6 hours: `0 */6 * * *`
- Twice daily (8 AM & 6 PM): `0 8,18 * * *`
- Weekly on Mondays at 9 AM: `0 9 * * 1`

### Message Format
In the "Format Weather" node, you can customize the message template by modifying the `message` variable in the function code. The current format includes:
- Current temperature with "feels like" temperature
- Min/max temperatures for the day
- Weather description and precipitation
- Wind speed and direction
- Cloud coverage percentage
- Sunrise and sunset times

### Language Support
In the "Get Weather" node, change the `language` parameter to get weather descriptions in different languages:
- English: "en"
- Spanish: "es"
- French: "fr"
- German: "de"
- Polish: "pl"

## Troubleshooting

### Common Issues
**Weather data not updating:**
- Verify your OpenWeatherMap API key is valid and active
- Check if you've exceeded your API rate limits
- Ensure the city name format is correct

**Messages not being sent:**
- Confirm your Telegram bot token is correct
- Verify the chat ID is accurate (it should be a number, not a username)
- Make sure you've started a conversation with your bot

**Workflow not triggering:**
- Check that the workflow is activated (the toggle switch should be ON)
- Verify the cron expression syntax is correct
- Ensure your n8n instance is running continuously

### Testing the Workflow
1. Use the "Test workflow" button to run manually
2. Check each node's output for errors
3. Verify the final message format in Telegram

## Node Descriptions
- **Schedule Trigger:** Automatically starts the workflow based on a cron schedule. Runs at specified intervals to fetch fresh weather data.
- **Get Weather:** Connects to the OpenWeatherMap API to retrieve current weather conditions for the specified location.
- **Format Weather:** Processes the raw weather data and creates a user-friendly message with emojis and organized information.
- **Send a text message:** Delivers the formatted weather report to your Telegram chat using the configured bot.

## Additional Features
You can extend this workflow by:
- Adding weather alerts for specific conditions (temperature thresholds, rain warnings)
- Including weather forecasts for multiple days
- Sending reports to multiple chat recipients
- Adding location-based emoji selection
- Integrating with other notification channels (email, Slack, Discord)

## Security Notes
- Keep your API keys and bot tokens secure
- Don't share your chat ID publicly
- Consider using n8n's credential system for storing sensitive information
- Regularly rotate your API keys for better security

Special thanks to Arkadiusz, the only person who supports me in the n8n mission to make automation great again.
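As an illustration of what the "Format Weather" Code node's logic might look like, here is a minimal sketch. It assumes the standard OpenWeatherMap current-weather response shape (`main.temp`, `weather[0].description`, `wind.speed`, `clouds.all`); verify the field names against your own node's input, and note that the real node also adds emojis, precipitation, and sunrise/sunset times.

```javascript
// Minimal sketch of a message formatter for OpenWeatherMap's
// current-weather response. Field names assume the standard API shape.
function formatWeather(data) {
  const lines = [
    `Weather in ${data.name}:`,
    `Temp: ${data.main.temp}°C (feels like ${data.main.feels_like}°C)`,
    `Min/Max: ${data.main.temp_min}°C / ${data.main.temp_max}°C`,
    `Conditions: ${data.weather[0].description}`,
    `Wind: ${data.wind.speed} m/s at ${data.wind.deg}°`,
    `Clouds: ${data.clouds.all}%`,
  ];
  return lines.join('\n');
}
```

In an n8n Code node, `data` would come from the Get Weather node's output item, and the result would be assigned to the `message` variable mentioned above.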
by WeblineIndia
# Facebook Page Negative Review Watchdog - Slack Escalation + Supabase Ticket

This workflow automatically monitors Facebook Page reviews, detects negative feedback (≤ 2 stars), alerts the support team via Slack, and attempts to create a support record in Supabase with built-in error handling.

The workflow listens for Facebook Page reviews through a webhook. When a review with a rating of 2 stars or less is received, the workflow prepares and standardizes the incoming data, sends an immediate Slack alert to the support team, and attempts to store the review as a support record in Supabase. If the database operation fails, a fallback Slack alert is triggered with the relevant error details.

You receive:
- **Instant Slack alerts for negative Facebook reviews**
- **Centralized data preparation via a global configuration step**
- **Automated support record creation**
- **Error visibility if storage fails**
- **No manual monitoring of Facebook reviews**

Ideal for customer support teams that want immediate visibility and structured tracking of negative customer feedback.

## Quick Start - Implementation Steps
1. Import the provided n8n workflow JSON.
2. Configure the Facebook Page Review Trigger webhook URL in your Facebook integration.
3. Review and adjust the Global Configuration node to match your incoming payload structure.
4. Connect your Slack credentials and select the channel for alerts.
5. Connect your Supabase credentials and configure the table used for storage.
6. Activate the workflow; monitoring starts instantly.

## What It Does
This workflow automates negative review handling:
1. Receives Facebook Page review data via a webhook.
2. Prepares and standardizes key review fields using a global configuration step.
3. Checks whether the review rating is ≤ 2 stars.
4. Sends a formatted Slack alert to the support team with full review context.
5. Attempts to create a support record in Supabase.
6. Detects Supabase insert failures.
7. Sends a fallback Slack alert including the Supabase error message if record creation fails.

This ensures no negative review is missed, even if downstream storage encounters issues.

## Who's It For
This workflow is ideal for:
- Customer support teams
- Social media monitoring teams
- SaaS companies handling public feedback
- Operations teams needing visibility into failures
- Product teams tracking user dissatisfaction
- Any business receiving Facebook Page reviews

## Requirements
To run this workflow, you need:
- **n8n instance** (cloud or self-hosted)
- **Facebook Page review integration** (webhook-based)
- **Slack workspace** with API access
- **Supabase project** with insert permissions
- Basic understanding of JSON payloads and webhooks

## How It Works
1. **Facebook Page Review Trigger** - Receives new review data via POST webhook.
2. **Global Configuration** - Maps and standardizes incoming review fields such as rating, review text, reviewer name, and page name for consistent downstream usage.
3. **Check Negative Review (≤ 2 Stars)** - Filters reviews and allows execution only for negative ratings.
4. **Slack - New Negative Review Alert** - Sends an immediate Slack notification with the page name, reviewer name, rating, and review text.
5. **Create Support Case (Supabase)** - Attempts to store the review as a support record.
6. **Check Case Creation Failure** - Verifies whether the Supabase insert returned an error.
7. **Slack - Case Creation Failed Alert** - Sends a fallback Slack alert including review context and Supabase error details.

## Setup Steps
1. Import the workflow JSON into n8n.
2. Open Facebook Page Review Trigger and copy the webhook URL.
3. Configure your Facebook system to send review events to this webhook.
4. Review the Global Configuration node and update field mappings if needed.
5. Connect Slack API credentials and select the desired channel.
6. Connect Supabase credentials and configure the target table.
7. Save and activate the workflow.
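The negative-review check and the Slack message body can be sketched as follows. This is a hedged illustration: the actual workflow uses an n8n IF node and a Slack node, and your webhook's field names (`rating`, `pageName`, `reviewerName`, `text`) are standardized by the Global Configuration node, so adjust them to match your mapping.

```javascript
// Hypothetical shape of the ≤ 2 star filter. Field names are assumptions
// based on the standardized fields described above.
function isNegativeReview(payload) {
  return Number(payload.rating) <= 2;
}

// Hypothetical Slack alert body mirroring the alert contents listed above.
function buildSlackAlert(review) {
  return [
    ':rotating_light: New negative Facebook review',
    `Page: ${review.pageName}`,
    `Reviewer: ${review.reviewerName}`,
    `Rating: ${review.rating} / 5`,
    `Review: ${review.text}`,
  ].join('\n');
}
```

Coercing the rating with `Number()` guards against webhooks that deliver the star rating as a string rather than a number.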
## How To Customize Nodes

### Customize Review Threshold
Modify the Check Negative Review (≤ 2 Stars) IF node:
- Change the rating threshold (e.g., ≤ 1 or ≤ 3)
- Add additional conditions such as page name or keywords

### Customize Slack Alerts
You may add:
- Emojis for urgency
- Mentions (@channel, @support)
- Links to the Facebook review
- Severity labels (LOW / HIGH)

### Customize Data Storage
You can extend stored data with:
- Review timestamp
- Review ID
- Review URL
- Status (Open / In Progress / Resolved)
- Assigned support agent

## Add-Ons (Optional Enhancements)
You can extend this workflow to:
- Prevent duplicate review inserts
- Add retry logic for storage failures
- Route alerts to different Slack channels per page
- Create dashboards from stored review data
- Integrate ticketing tools (Zendesk, Jira)
- Add sentiment analysis using AI
- Generate daily or weekly negative review summaries

## Use Case Examples
1. **Customer Support Monitoring** - Instant awareness of dissatisfied customers.
2. **Social Media Reputation Management** - No need to manually check Facebook reviews.
3. **SLA Enforcement** - Ensure negative feedback is logged and tracked.
4. **Operations Visibility** - Error alerts ensure failures never go unnoticed.
5. **Product Feedback Loop** - Capture recurring complaints for product improvement.

## Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-----|---------------|----------|
| Slack alert not sent | Invalid Slack credentials | Reconnect Slack API |
| Storage insert fails | Table or permission issue | Verify table and access rules |
| Error alert always triggers | Incorrect IF condition | Validate error field mapping |
| Workflow not running | Workflow inactive | Activate workflow |
| Webhook not firing | Incorrect URL | Re-check webhook configuration |

## Need Help?
If you need help extending or productionizing this workflow, such as adding retries, scaling alerts, improving error handling, or integrating additional systems, our n8n automation team at WeblineIndia can assist with advanced automation and workflow design.
by Sparsh From Automation Jinn
# Automated SEO Data Engine using DataForSEO & Airtable

This workflow automatically pulls SERP rankings, competitor keywords, and related keyword ideas from DataForSEO and stores structured results in Airtable, making SEO tracking and keyword research streamlined and automated.

## What this automation does

| Step | Component | Purpose |
|------|-----------|---------|
| 1 | Trigger (Manual: "Execute workflow") | Starts the workflow on demand; optionally replaceable with a schedule or webhook. |
| 2 | Read seed keywords from Airtable (SERP Keywords table) | Fetches the list of keywords for which to track SERP. |
| 3 | Post SERP task to DataForSEO API | Requests Google organic SERP results (depth up to 10) for each keyword. |
| 4 | Wait + Poll for results (after ~1 min) | Gives DataForSEO time to process, then retrieves the completed task results. |
| 5 | Parse & store SERP results into Airtable (SERP Rankings table) | Records rank, URL, domain, title, description, breadcrumb, etc. for each result. |
| 6 | Read competitor list from Airtable (Competitor Research table) | Fetches competitors (domains/sites) marked for keyword research. |
| 7 | Post competitor-site keywords task to DataForSEO | Fetches keywords used by competitor sites. |
| 8 | Wait + Poll + Store competitor keywords into Airtable (Competitor Keywords Research) | Captures keyword, competition level, search volume, CPC, and monthly volume trends. |
| 9 | Aggregate seed keywords and request related keywords via DataForSEO | Retrieves related/similar keyword ideas for the seed list (keyword expansion). |
| 10 | Store related keywords into Airtable (Similar Keywords table) | Saves keyword data for long-tail/expansion analysis. |

## Key Integrations & Tools
- **n8n** - Workflow automation and orchestration
- **Airtable** - Storage for seed keywords, competitor list, and all result tables (SERP Rankings, Competitor Keywords, Similar Keywords)
- **DataForSEO API** - For SERP data, competitor-site keywords, and related keyword suggestions
- Core n8n nodes: Trigger, HTTP Request, Wait, Split Out, Aggregate, Airtable (search & create)

## Data Output / Stored Fields

### SERP Rankings
- type, rank_group, rank_absolute, page, domain, title, description, url, breadcrumb
- Linked to the original seed keyword via a SERP Keywords reference

### Competitor Keywords & Similar Keywords
- Keyword
- Competition, Competition_Index
- Search_Volume, CPC, Low_Top_Of_Page_Bid, High_Top_Of_Page_Bid (if available)
- Monthly search-volume fields: Jan_2025, Feb_2025, ..., Dec_2025 (mapped from the API's monthly_searches)
- For competitor keywords: linked to the competitor (company/domain)
- For similar keywords: linked to the seed keyword

## Important Notes
- **Month-volume mapping:** Ensure the mapping from the API's monthly_searches to month fields is correct; wrong indices will mislabel month data.
- **Fixed wait time:** The current 1-minute wait may not always suffice; for large workloads or slow API responses, increase the wait or implement polling/backoff logic.
- **No deduplication:** Running repeatedly may produce duplicate Airtable records. Consider adding search-or-update logic to avoid duplicates.
- **Rate limits / quotas:** Airtable and DataForSEO have limits; batch carefully, throttle requests, or add spacing to avoid hitting them.
- **Credentials security:** Store Airtable and DataForSEO API credentials securely in n8n's credentials manager; avoid embedding tokens directly in workflow JSON.
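One way to make the month-volume mapping robust, as flagged in the Important Notes, is to key each field off the `year` and `month` values carried by each `monthly_searches` entry instead of relying on array position. This is a hedged sketch; confirm the entry shape (`year`, `month`, `search_volume`) against the actual DataForSEO response you receive.

```javascript
// Sketch: map monthly_searches entries to named Airtable fields like
// "Jan_2025" by the entry's own year/month values, not array index,
// so out-of-order or partial arrays cannot mislabel months.
function mapMonthlyVolumes(monthlySearches) {
  const names = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];
  const fields = {};
  for (const entry of monthlySearches) {
    fields[`${names[entry.month - 1]}_${entry.year}`] = entry.search_volume;
  }
  return fields;
}
```

Because the field name is derived from the entry itself, a response that starts mid-year or spans two years still lands in the right columns.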
by Milan Vasarhelyi - SmoothWork
## Video Introduction
Want to automate your inbox or need a custom workflow? Book a Call | DM me on LinkedIn

## Overview
This workflow automates sending personalized SMS messages directly from a Google Sheet using Twilio. Simply update a row's status to "To send" and the workflow automatically sends the text message, then updates the status to "Success" or "Error" based on delivery results.

Perfect for event reminders, bulk notifications, appointment confirmations, or any scenario where you need to send customized messages to multiple recipients.

## Key Features
- **Simple trigger mechanism:** Change the status column to "To send" to queue messages
- **Personalization support:** Use [First Name] and [Last Name] placeholders in message templates
- **Automatic status tracking:** The workflow updates your spreadsheet with delivery results
- **Error handling:** Failed deliveries are clearly marked, making it easy to identify issues like invalid phone numbers
- **Runs every minute:** The workflow polls your sheet continuously when active

## Setup Instructions

### Step 1: Copy the Template Spreadsheet
Make a copy of the Google Sheets template by going to File > Make a copy. You must use your own copy so the workflow has permission to update status values.

### Step 2: Connect Your Accounts
- **Google Sheets:** Add your Google account credentials to the 'Monitor Google Sheet for SMS Queue' trigger node
- **Twilio:** Sign up for a free Twilio account (a trial works for testing). From your Twilio dashboard, get your Account SID, Auth Token, and Twilio phone number, then add these credentials to the 'Send SMS via Twilio' node

### Step 3: Configure the Workflow
In the Config node, update:
- `sheet_url`: Paste the URL of your copied Google Sheet
- `from_number`: Enter your Twilio phone number (include the country code, e.g., +1234567890)

### Step 4: Activate and Test
Activate the workflow using the toggle in the top right corner. Add a row to your sheet with the required information (ID, First Name, Phone Number, Message Template) and set the Status to "To send". Within one minute, the workflow will process the message and update the status accordingly.
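The placeholder substitution behind the personalization feature can be sketched in a few lines. The `[First Name]` and `[Last Name]` tokens come from the template description; the row field names (`firstName`, `lastName`) are illustrative assumptions about how the sheet columns are mapped.

```javascript
// Sketch of the [First Name] / [Last Name] substitution described above.
// Falls back to an empty string when a column is blank so the message
// never contains a literal "undefined".
function personalize(template, row) {
  return template
    .replaceAll('[First Name]', row.firstName ?? '')
    .replaceAll('[Last Name]', row.lastName ?? '');
}
```

For example, the template `Hi [First Name], your appointment is confirmed.` becomes a per-recipient message before being handed to the Twilio node.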
by Pauline
This workflow sends you a Slack alert when one of your n8n workflows encounters an error.

- **Error Trigger:** This node launches the workflow when one of your active workflows fails
- **Slack node:** This node sends you a customized message alerting you so you can check the error

Note: you don't have to activate this workflow for it to be effective.
by Elodie Tasia
Create centralized, structured logs directly from your n8n workflows, using Supabase as your scalable log database. Whether you're debugging a workflow, monitoring execution status, or tracking error events, this template makes it easy to log messages in a consistent, structured format inspired by Log4j2 levels (DEBUG, INFO, WARN, ERROR, FATAL). You'll get a reusable sub-workflow that lets you log any message with optional metadata, tied to a workflow execution and a specific node.

## What this template does
Provides a sub-workflow that inserts log entries into Supabase. Each log entry supports the following fields:
- workflow_name: your n8n workflow identifier
- node_name: the last executed node
- execution_id: the n8n execution ID, for correlation
- log_level: one of DEBUG, INFO, WARN, ERROR, FATAL
- message: textual message for the log
- metadata: optional JSON metadata (flexible format)

Comes with examples for different log levels. Easily call the sub-workflow from any step with an Execute Workflow node and pass dynamic parameters.

## Use Cases
- Debug complex workflows without relying on internal n8n logs.
- Catch and trace errors with contextual metadata.
- Integrate logs into external dashboards or monitoring tools via Supabase SQL or APIs.
- Analyze logs by level, time, or workflow.

## Requirements
To use this template, you'll need:
- A Supabase project with:
  - A log_level_type enum
  - A logs table matching the expected structure
- A service role key or Supabase credentials available in n8n.

The table schema and SQL scripts are given in the template file.

## How to Use This Template
1. Clone the sub-workflow into your n8n instance.
2. Set up Supabase credentials (in the Supabase node).
3. Call the sub-workflow using the Execute Workflow node.
Provide input values like: { "workflow_name": "sync_crm_to_airtable", "execution_id": {{$execution.id}}, "node_name": "Airtable Insert", "log_level": "INFO", "message": "New contact pushed to Airtable successfully", "metadata": { "recordId": "rec123", "fields": ["email", "firstName"] } } Repeat anywhere you need to log custom events.
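Before handing the input to the sub-workflow, it can help to validate it against the expected schema. This is a hedged helper, not part of the template itself: the field names match the log entry fields listed above, but the validation rules and defaulting are illustrative assumptions.

```javascript
// Hypothetical helper that builds a log entry matching the fields
// described above and rejects unknown log levels before the Supabase
// insert (mirroring a log_level_type enum constraint).
function buildLogEntry({ workflowName, executionId, nodeName, level, message, metadata }) {
  const levels = ['DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL'];
  if (!levels.includes(level)) {
    throw new Error(`Unknown log level: ${level}`);
  }
  return {
    workflow_name: workflowName,
    execution_id: String(executionId), // n8n execution IDs may arrive as numbers
    node_name: nodeName,
    log_level: level,
    message,
    metadata: metadata ?? {},
  };
}
```

Failing fast on a bad level keeps invalid rows from ever reaching the enum-typed column, where the insert would error anyway.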
by Joel Sage
## How it works
This workflow takes user inputs and generates an image using the Riverflow 2.0 model through the Replicate API. It can handle both image generation and image editing. Additionally, for specific text modifications, the source text and font can be provided to help guide the model.

Here is a high-level overview of how this workflow operates:

1. An initial input form acts as the trigger for this workflow. It takes in all the parameters required by Riverflow 2.0, including:
   - Model
   - Font URLs + Texts
   - Resolution
   - Initial Image
   - Instruction (Required)
   - Aspect Ratio
   - Transparency
   - Enhance Prompt
   - Max Iterations
   - Safety Checker
   - Super Resolution
   - References
   - Number of Outputs
2. These inputs are then sanitized using a script and passed into a sub-workflow that handles the HTTP requests.
3. Sub-workflow: takes all the given parameters and makes a POST request to start the Riverflow 2.0 generation, followed by a looped GET request to check when the status is complete. The outputs are then stored in a data table. Running this in a sub-workflow lets images be generated in parallel when multiple outputs are required; the data table enables communication between the parent workflow and the parallel-running sub-workflows.
4. In the parent workflow, we poll the data table in a loop to check whether the sub-workflow's outputs have been inserted.
5. Once we have the outputs (as URLs), the raw image is also provided so that it can be viewed or downloaded directly from the workflow (the URLs remain available if they need to be used elsewhere).

## Setup
This workflow requires a Replicate API key, which can be obtained from https://replicate.com/. Additionally, an n8n data table is needed to store outputs from the parallel processes.

## Ideal for
- **E-commerce Brands:** Riverflow 2.0, developed by Sourceful, is heavily used for product and packaging designs and images. This workflow helps brands create images for e-commerce purposes.
- **Marketers:** Teams can generate multiple creative variations in parallel for paid ads, social media, email campaigns, and A/B testing. Instead of manually editing assets, marketers can automate background changes, color variations, and creative iterations to rapidly test and scale winning campaigns.
- **Design and Creative Teams:** Creative teams can quickly prototype visual concepts, enhance product renders, upscale images, and generate multiple design directions in minutes. This reduces repetitive production work and lets designers focus on higher-level creative decisions.

## Customization
- **Connect to External Systems:** Outputs from this workflow can be passed to an external database or system such as Shopify or Google Drive.
- **Post-processing Automation:** This workflow can be extended into a larger image automation pipeline. Generated assets can automatically undergo resizing, background removal, format conversion, watermarking, compression optimization, or thumbnail generation, ensuring images are production-ready without manual intervention.
- **Ad Monitoring and Reports:** Generated images can feed directly into marketing workflows (e.g., paid ads or social campaigns). Performance metrics such as CPM, CTR, and conversion rate can be tracked and logged, enabling teams to measure creative effectiveness and iterate on prompts or variations based on real performance data.

More information about Riverflow 2.0: www.riverflow.ai
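The sanitization step mentioned above could look something like the sketch below. Everything here is hypothetical: the parameter names, defaults, and the output clamp are illustrative assumptions, not taken from the actual script or Riverflow 2.0's API schema.

```javascript
// Hypothetical input-sanitization sketch for the form parameters listed
// above. Parameter names, defaults, and the output cap of 4 are all
// assumptions for illustration.
function sanitizeInputs(form) {
  const n = parseInt(form.numberOfOutputs, 10) || 1;
  return {
    prompt: (form.instruction ?? '').trim(),     // Instruction is the one required field
    aspect_ratio: form.aspectRatio || '1:1',     // assumed default
    num_outputs: Math.min(Math.max(n, 1), 4),    // clamp to an assumed safe range
    enhance_prompt: Boolean(form.enhancePrompt),
    transparency: Boolean(form.transparency),
  };
}
```

Normalizing form values like this before the sub-workflow keeps the HTTP request node simple and prevents malformed values (blank prompts, non-numeric output counts) from reaching the API.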
by Caio Carvalho
This workflow automatically collects historical price data from Polymarket Up/Down markets and stores it in Supabase, creating a structured, query-ready dataset for analysis. By continuously fetching price movements from prediction markets such as Bitcoin, S&P 500, and other Up/Down series, the automation enables reliable historical tracking without manual intervention. The data is normalized and persisted in Supabase, making it easy to run SQL queries, build dashboards, perform quantitative analysis, or integrate with analytics and AI pipelines. This setup is ideal for traders, data analysts, and developers who need accurate Polymarket price history for research, strategy testing, or long-term market monitoring.

How it works

- Provide the slug of the serial market you want to analyze in the initial form. The workflow is designed for live Polymarket "Up or Down" markets; copy the slug from the current market and submit it.
- From the slug, the workflow extracts the event ID and derives the corresponding series ID. All events in that series are retrieved, organized, and stored in an internal n8n table. This process runs only once per series to avoid duplicate data.
- A second workflow reads the stored events, retrieves the Up and Down token IDs, records market closing times, and filters out open markets so only historical data is processed.
- Market end times are converted to Unix timestamps, and start times are calculated. The workflow is structured for 1-hour markets, but the time-conversion node can be adjusted for other durations.
- Historical price data is then fetched and stored in a Supabase table for later analysis.

How to use

- Copy the slug of the desired Up or Down market from Polymarket and submit it through the form.
- After the first workflow finishes, run the second workflow to collect historical price data. Events are processed in batches of 100 to avoid performance issues.
- If execution stops, simply rerun the workflow and it will continue from where it left off.

Need help with implementation or customization? If you require assistance setting up this workflow, adapting it to your infrastructure, or extending it for advanced analytics, feel free to reach out by email at caio@caravelsai.com.
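The time-conversion step described above (Unix end timestamps plus derived start times for 1-hour markets) can be sketched like this. A minimal sketch: the function name and the ISO-8601 string input are assumptions for illustration, not the workflow's actual node code.

```python
from datetime import datetime, timedelta

def market_window(end_iso: str, duration_hours: int = 1):
    """Convert a market's ISO-8601 end time to a Unix timestamp and derive
    the start time by subtracting the market duration (1 hour by default)."""
    end = datetime.fromisoformat(end_iso.replace("Z", "+00:00"))
    start = end - timedelta(hours=duration_hours)
    return int(start.timestamp()), int(end.timestamp())
```

Changing `duration_hours` is the equivalent of adjusting the time-conversion node for markets of other durations.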
by Amir Safavi-Naini
LLM Cost Monitor & Usage Tracker for n8n

What This Workflow Does

This workflow provides comprehensive monitoring and cost tracking for all LLM/AI agent usage across your n8n workflows. It extracts detailed token usage data from any workflow execution and calculates precise costs based on current model pricing.

The Problem It Solves

When running LLM nodes in n8n workflows, the token usage and intermediate data are not directly accessible within the same workflow. This monitoring workflow bridges that gap by:
- Retrieving execution data using the execution ID
- Extracting all LLM usage from any nested structure
- Calculating costs with customizable pricing
- Providing detailed analytics per node and model

Warning: it works only after the full execution of the monitored workflow (i.e., you cannot get this data before all tasks in the workflow have completed).

Setup Instructions

Prerequisites
- Experience required: basic familiarity with n8n LLM nodes and AI agents
- Agent configuration: in your monitored workflows, go to the agent settings and enable "Return Intermediate Steps"
- To retrieve execution data, you need to set up the n8n API in your instance (also available on the free version)

Installation Steps
1. Import this monitoring workflow into your n8n instance
2. Go to Settings, select n8n API in the left bar, and define an API key. You can then add this as the credential for your "Get an Execution" node
3. Configure your model name mappings in the "Standardize Names" node
4. Update model pricing in the "Model Prices" node (prices per 1M tokens)

To monitor a workflow:
1. Add an "Execute Workflow" node at the end of your target workflow
2. Select this monitoring workflow
3. Important: turn OFF "Wait For Sub-Workflow Completion"
4. Pass the execution ID as input

Customization

When You See Errors

If the workflow enters the error path, it means an undefined model was detected.
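The "extract from any nested structure" step can be sketched as a recursive walk over the execution JSON. A minimal sketch: the `tokenUsage` field name and the overall shape are assumptions based on typical n8n execution data, not the workflow's actual extraction code.

```python
def extract_token_usage(data, path=""):
    """Recursively walk execution data and collect every tokenUsage object,
    regardless of nesting depth, recording where each one was found."""
    found = []
    if isinstance(data, dict):
        if "tokenUsage" in data:
            found.append({"path": path, **data["tokenUsage"]})
        for key, value in data.items():
            found.extend(extract_token_usage(value, f"{path}/{key}"))
    elif isinstance(data, list):
        for i, value in enumerate(data):
            found.extend(extract_token_usage(value, f"{path}[{i}]"))
    return found
```

Because the walk is depth-first over both dicts and lists, it finds usage records whether they come from a top-level LLM node or an agent's nested intermediate steps.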
Simply:
1. Add the model name to the standardize_names_dic
2. Add its pricing to the model_price_dic
3. Re-run the workflow

Configurable Elements
- **Model Name Mapping**: Standardize different model name variations (e.g., "gpt-4-0613" → "gpt-4")
- **Pricing Dictionary**: Set costs per million tokens for input/output
- **Extraction Depth**: Captures tokens from any nesting level automatically

Output Data

Per LLM Call
- **Cost Breakdown**: Prompt, completion, and total costs in USD
- **Token Metrics**: Prompt tokens, completion tokens, total tokens
- **Performance**: Execution time, start time, finish reason
- **Content Preview**: First 100 chars of input/output for debugging
- **Model Parameters**: Temperature, max tokens, timeout, retry count
- **Execution Context**: Workflow name, node name, execution status
- **Flow Tracking**: Previous nodes chain

Summary Statistics
- Total executions and costs
- Breakdown by model type
- Breakdown by node
- Average cost per call
- Total execution time

Key Benefits
- **No External Dependencies**: Everything runs within n8n
- **Universal Compatibility**: Works with any workflow structure
- **Automatic Detection**: Finds LLM usage regardless of nesting
- **Real-time Monitoring**: Track costs as workflows execute
- **Debugging Support**: Preview actual prompts and responses
- **Scalable**: Handles multiple models and complex workflows

Example Use Cases
- **Cost Optimization**: Identify expensive nodes and optimize prompts
- **Usage Analytics**: Track token consumption across teams/projects
- **Budget Monitoring**: Set alerts based on cost thresholds
- **Performance Analysis**: Find slow-running LLM calls
- **Debugging**: Review actual inputs/outputs without logs
- **Compliance**: Audit AI usage across your organization

Quick Start
1. Import the workflow
2. Update model prices (if needed)
3. Add monitoring to any workflow with the Execute Workflow node
4. View detailed cost breakdowns instantly

Note: Prices are configured per million tokens. Defaults include GPT-4, GPT-3.5, Claude, and other popular models. Add custom models as needed.
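The name-standardization and per-million-token pricing logic can be sketched as below. A minimal sketch under assumptions: the dictionary contents are illustrative placeholders, not current pricing, and the function mirrors (but is not) the workflow's Standardize Names and Model Prices nodes, including the error path for undefined models.

```python
STANDARDIZE_NAMES = {"gpt-4-0613": "gpt-4"}  # variant name -> canonical name
MODEL_PRICES = {"gpt-4": {"input": 30.0, "output": 60.0}}  # USD per 1M tokens (illustrative)

def call_cost(model, prompt_tokens, completion_tokens):
    """Resolve the model name to its canonical form, then price the call
    per million tokens; unknown models raise, like the workflow's error path."""
    canonical = STANDARDIZE_NAMES.get(model, model)
    if canonical not in MODEL_PRICES:
        raise KeyError(f"undefined model: {model}")  # add it to both dicts and re-run
    price = MODEL_PRICES[canonical]
    prompt_cost = prompt_tokens / 1_000_000 * price["input"]
    completion_cost = completion_tokens / 1_000_000 * price["output"]
    return {"prompt": prompt_cost, "completion": completion_cost,
            "total": prompt_cost + completion_cost}
```

Summing these per-call dictionaries across all extracted usage records gives the per-node and per-model breakdowns described above.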
by Samir Saci
**Tags**: Supply Chain, Inventory Management, ABC Analysis, Pareto Principle, Demand Variability, Automation, Google Sheets

Context

Hi! I'm Samir, a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies optimise inventory and logistics operations by combining data analytics and workflow automation. This workflow is part of our inventory optimisation toolkit, allowing businesses to perform ABC classification and Pareto analysis directly from their transactional sales data.

> Automate inventory segmentation with n8n!

For business inquiries, feel free to connect with me on LinkedIn.

Who is this template for?

This workflow is designed for supply chain analysts, demand planners, and inventory managers who want to:
- Identify their top-performing items (Pareto 80/20 principle)
- Classify products into ABC categories based on sales contribution
- Evaluate demand variability (XYZ classification support)

Imagine you have a Google Sheet where daily sales transactions are stored: the workflow aggregates sales by item, calculates each item's cumulative contribution, and assigns A, B, or C classes. It also computes the mean, standard deviation, and coefficient of variation (CV) to highlight demand volatility.

How does it work?
This workflow automates the process of ABC & Pareto analysis from raw sales data:
- Google Sheets input provides daily transactional sales
- Aggregation and code nodes compute sales, turnover, and cumulative shares
- ABC class mapping assigns items into A/B/C buckets
- Demand variability metrics (XYZ) are calculated
- Results are appended into dedicated Google Sheets tabs for reporting

Watch My Tutorial

Steps:
1. Load daily sales records from Google Sheets
2. Filter out items with zero sales
3. Aggregate sales by store, item, and day
4. Perform Pareto analysis to calculate cumulative turnover share
5. Compute demand variability (mean, stdev, CV)
6. Assign ABC classes based on cumulative share thresholds
7. Append results into the ABC XYZ and Pareto output sheets

What do I need to get started?

You'll need:
- A Google Sheet with sales transactions (date, item, quantity, turnover), available here: Test Sheet
- A Google Sheets account connected in n8n
- Basic knowledge of inventory analysis (ABC/XYZ)

Next Steps

Use the sticky notes in the n8n canvas to:
- Add your Google Sheets credentials
- Replace the Sheet ID with your own sales dataset
- Run the workflow and check the output tabs: ABC XYZ, Pareto, and Store Sales

This template was built using n8n v1.107.3
Submitted: September 15, 2025
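The Pareto/ABC and variability steps above can be sketched as follows. A minimal sketch under assumptions: the 80%/95% cumulative-share thresholds are common defaults, not necessarily the ones configured in the template's class-mapping node.

```python
from statistics import mean, pstdev

def abc_classify(items, a_threshold=0.80, b_threshold=0.95):
    """Sort (item, turnover) pairs by turnover, compute each item's cumulative
    share of total turnover, and assign A/B/C classes at the thresholds."""
    total = sum(turnover for _, turnover in items)
    ranked = sorted(items, key=lambda x: x[1], reverse=True)
    classified, cumulative = [], 0.0
    for name, turnover in ranked:
        cumulative += turnover / total
        cls = "A" if cumulative <= a_threshold else "B" if cumulative <= b_threshold else "C"
        classified.append({"item": name, "turnover": turnover,
                           "cumulative_share": round(cumulative, 4), "class": cls})
    return classified

def demand_variability(daily_quantities):
    """Mean, standard deviation, and coefficient of variation of daily demand,
    as used for XYZ-style volatility screening."""
    mu = mean(daily_quantities)
    sd = pstdev(daily_quantities)
    return {"mean": mu, "stdev": sd, "cv": sd / mu if mu else float("inf")}
```

A high-turnover item with a low CV (stable demand) is the classic A/X candidate for tight inventory control, while low-turnover, high-CV items fall toward C/Z.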