by PiAPI
What does the workflow do? This workflow generates animated illustrations for content creators and social media professionals using the Midjourney (unofficial) and Kling (unofficial) APIs served by PiAPI. PiAPI is a platform that provides professional API services; with it, users can generate striking animated artwork from a simple n8n workflow, without juggling complex settings across multiple AI models. What is an animated illustration? An animated illustration is a digitally enhanced artwork that combines traditional illustration styles with subtle, purposeful motion to enrich storytelling while preserving its original artistic essence. Who is this workflow for? Social Media Content Creators: Produces animated illustrations for social media posts. Digital Marketers: Generates marketing materials with motion graphics. Independent Content Producers: Creates animated content without specialized animation skills. Step-by-step Setting Instructions For most uses, you only need to change the image prompt and the motion of the final video, following the instructions below: Sign in to your PiAPI account and get your X-API-Key. Fill in your PiAPI X-API-Key in the Midjourney and Kling nodes. Enter your desired image prompt in the Prompt node. Enter the motion prompt in the Kling Video Generator node. For more customized setups, users can add more nodes to produce additional output images and videos, or change the target image to get a better result. As a recommendation, users can also switch video models; we would suggest the live-wallpaper LoRA of Wanx. Check the API docs for more use cases of the video and image models and best practices. Use Case Input Prompt A gentle girl and a fluffy rabbit explore a sunlit forest together, playing by a sparkling stream. Butterflies flutter around them as golden sunlight filters through green leaves. 
Warm and peaceful atmosphere, 4K nature documentary style. --s 500 --sref 4028286908 --niji 6 Output Video Troubleshooting Check that the X-API-Key has been filled in on every node that needs it. Check your task status under Task History in PiAPI for more details about a task. More Generation Cases for Reference
by darrell_tw
Water Reminder Workflow This workflow demonstrates how to use n8n and Slack to build an intelligent water-drinking reminder system, combined with Google Sheets for data recording and OpenAI for generating personalized reminder messages. Google Sheet Template The iOS shortcut template: The result in iOS Health: The template demo on YouTube Key Features Scheduled Reminders: Automatically sends water reminders at random times every hour. Intelligent Scheduling: Delays the next reminder if you've recently had water. AI-Generated Messages: Uses OpenAI to generate friendly and non-repetitive reminder messages. Data Tracking: Records daily water intake and calculates percentage of goal achievement. Quick Response: Easily record water intake through Slack buttons. iOS Integration: Provides iOS shortcut links to sync data with the Health app. Pre-Configuration Requirements To use this workflow, you need to set up the following: Google Sheets: Create a Google spreadsheet with log and setting sheets The log sheet should include date, time, and value columns The setting sheet is used to store daily water intake goals Slack: Create a Slack app and obtain an API token Configure permissions for interactive buttons OpenAI: Obtain an OpenAI API key iOS Shortcut (optional): Create an iOS shortcut named darrell_water for recording health data Node Configurations 1. Scheduled Triggers and Data Collection 1.1. Schedule Trigger **Purpose**: Triggers water reminders on schedule **Configuration**: Cron Expression: 0 {{ Math.floor(Math.random() * 11) }} 8-23 * * * Triggers at a random minute (0–10) of each hour, only between 8 AM and 11 PM 1.2. Google Sheets - Get Target **Purpose**: Retrieves daily water intake goal **Configuration**: Document ID: Your Google spreadsheet ID Sheet Name: setting 1.3. 
Google Sheets - Get Log **Purpose**: Retrieves today's water intake records **Configuration**: Document ID: Your Google spreadsheet ID Sheet Name: log Filter Condition: date equals today's date {{ $now.format('yyyy-MM-dd') }} 1.4. Summarize **Purpose**: Calculates total water intake for today **Configuration**: Fields to Summarize: value (sum) 1.5. Limit **Purpose**: Gets the most recent water intake record **Configuration**: Keep: Last items 2. Intelligent Reminder Logic 2.1. Combine Data **Purpose**: Merges target and actual water intake data **Configuration**: Combine By: Combine by position Number of Inputs: 3 2.2. If **Purpose**: Checks if water was consumed recently **Configuration**: Condition: {{ DateTime.fromISO($json.date+"T"+$json.time).format('yyyy-MM-dd HH:mm:ss') }} is after {{ $now.minus(30, "minutes") }} 2.3. Wait **Purpose**: Briefly delays the reminder if water was consumed recently **Configuration**: Wait Time: {{ Math.floor(Math.random() * 1) + 1 }} minutes 3. AI Message Generation and Sending 3.1. OpenAI **Purpose**: Generates personalized water reminder messages **Configuration**: Model: gpt-4o-mini Messages: System prompt: Requests responses in Traditional Chinese and in JSON format User prompt: Includes information about last water time, current time, goal, and progress Temperature: 1 3.2. Slack Send Drink Notification **Purpose**: Sends water reminders to Slack channel **Configuration**: Channel: Your Slack channel ID Message Type: Block Block UI: Contains AI-generated reminder message and water amount buttons (100ml, 150ml, 200ml, 250ml, 300ml) 4. User Interaction and Data Recording 4.1. Slack Drink Webhook **Purpose**: Receives user interactions when water buttons are clicked **Configuration**: HTTP Method: POST Path: slack-water-webhook 4.2. Slack Action Payload **Purpose**: Parses Slack interaction data **Configuration**: Mode: Raw JSON Output: {{ $json.body.payload }} 4.3. 
Slack Action Drink Data **Purpose**: Extracts water amount and message information **Configuration**: Assignments: value: {{ $json.actions[0].value }} message_text: {{ $json.message.text }} shortcut_url: shortcuts://run-shortcut?name=darrell_water&input= shortcut_url_data: JSON containing water amount and time message_ts: {{ $json.container.message_ts }} 4.4. Google Sheets **Purpose**: Records water intake data to spreadsheet **Configuration**: Operation: Append Document ID: Your Google spreadsheet ID Sheet Name: log Column Mapping: date: {{ $now.format('yyyy-MM-dd') }} time: {{ $now.format('HH:mm:ss') }} value: {{ $json.value }} 4.5. Send to Slack with Confirm **Purpose**: Sends confirmation message and provides iOS shortcut link **Configuration**: Channel: Your Slack channel ID Message Type: Block Block UI: Contains confirmation message and iOS Health app button Reply Settings: Reply to the thread of the original message Author Information This workflow was created by darrell_tw_, an engineer focused on AI and automation. Contact: X Threads Instagram Website
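The 30-minute check in the If node (2.2) can be sketched in plain JavaScript. The workflow itself uses Luxon's DateTime inside an n8n expression; the helper below is an illustrative stand-in, not the node's exact code:

```javascript
// Was the last logged drink within the past 30 minutes?
// dateStr/timeStr mirror the log sheet's date and time columns.
function drankRecently(dateStr, timeStr, now = new Date(), windowMinutes = 30) {
  const last = new Date(`${dateStr}T${timeStr}`);
  const cutoff = new Date(now.getTime() - windowMinutes * 60 * 1000);
  return last > cutoff;
}
```

When this returns true, the workflow routes into the Wait node instead of sending a reminder immediately.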
by vinci-king-01
Product Price Monitor with Pushover and Baserow ⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template. This workflow automatically scrapes multiple e-commerce sites for selected products, analyzes weekly pricing trends, stores historical data in Baserow, and sends an instant Pushover notification when significant price changes occur. It is ideal for retailers who need to track seasonal fluctuations and optimize inventory or pricing strategies. Pre-conditions/Requirements Prerequisites An active n8n instance (self-hosted or n8n.cloud) ScrapeGraphAI community node installed At least one publicly accessible webhook URL (for on-demand runs) A Baserow database with a table prepared for product data Pushover account and registered application Required Credentials **ScrapeGraphAI API Key** – Enables web-scraping capabilities **Baserow Personal API Token** – Allows read/write access to your table **Pushover User Key & API Token** – Sends mobile/desktop push notifications (Optional) HTTP Basic Token or API keys for any private e-commerce endpoints you plan to monitor Baserow Table Specification

| Field Name | Type | Description |
|------------|---------------|------------------------|
| Product ID | Number | Internal ID or SKU |
| Name | Text | Product title |
| URL | URL | Product page |
| Price | Number | Current price (float) |
| Currency | Single select | USD, EUR, etc. |
| Last Seen | Date/Time | Last price check |
| Trend | Number | 7-day % change |

How it works 
Key Steps: **Webhook Trigger**: Manually or externally trigger the weekly price-check run. **Set Node**: Define an array of product URLs and metadata. **Split In Batches**: Process products one at a time to avoid rate limits. **ScrapeGraphAI Node**: Extract current price, title, and availability from each URL. **If Node**: Determine whether the price has changed by more than ±5% since the last entry. **HTTP Request (Trend API)**: Retrieve seasonal trend scores (optional). **Merge Node**: Combine scrape data with trend analysis. **Baserow Nodes**: Upsert the latest record and fetch historical data for comparison. **Pushover Node**: Send an alert when significant price movement is detected. **Sticky Notes**: Documentation and inline comments for maintainability. Set up steps Setup Time: 15-25 minutes Install Community Node: In n8n, go to “Settings → Community Nodes” and install ScrapeGraphAI. Create Baserow Table: Match the field structure shown above. Obtain Credentials: ScrapeGraphAI API key from your dashboard Baserow personal token (/account/settings) Pushover user key & API token Clone Workflow: Import this template into n8n. Configure Credentials in Nodes: Open each ScrapeGraphAI, Baserow, and Pushover node and select/enter the appropriate credential. Add Product URLs: Open the first Set node and replace the example array with your actual product list. Adjust Thresholds: In the If node, change the 5 value if you want a higher/lower alert threshold. Test Run: Execute the workflow manually; verify Baserow rows and the Pushover notification. Schedule: Add a Cron trigger or external scheduler to run weekly. Node Descriptions Core Workflow Nodes: **Webhook** – Entry point for manual or API-based triggers. **Set** – Holds the array of product URLs and meta fields. **SplitInBatches** – Iterates through each product to prevent request spikes. **ScrapeGraphAI** – Scrapes price, title, and currency from product pages. **If** – Compares the new price vs. the previous price in Baserow. 
**HTTP Request** – Calls a trend API (e.g., Google Trends) to get a seasonal score. **Merge** – Combines scraping results with trend data. **Baserow (Upsert & Read)** – Writes fresh data and fetches the historical price for comparison. **Pushover** – Sends a formatted push notification with the price delta. **StickyNote** – Documents purpose and hints within the workflow. Data Flow: Webhook → Set → SplitInBatches → ScrapeGraphAI ScrapeGraphAI → If True branch → HTTP Request → Merge → Baserow Upsert → Pushover False branch → Baserow Upsert Customization Examples Change Notification Channel to Slack // Replace the Pushover node with Slack { "channel": "#pricing-alerts", "text": `🚨 ${$json["Name"]} changed by ${$json["delta"]}% – now ${$json["Price"]} ${$json["Currency"]}` } Additional Data Enrichment (Stock Status) // Add to ScrapeGraphAI's selector map { "stock": { "selector": ".availability span", "type": "text" } } Data Output Format The workflow outputs structured JSON data: { "ProductID": 12345, "Name": "Winter Jacket", "URL": "https://shop.example.com/winter-jacket", "Price": 79.99, "Currency": "USD", "LastSeen": "2024-11-20T10:34:18.000Z", "Trend": 12, "delta": -7.5 } Troubleshooting Common Issues Empty scrape result – Check if the product page changed its HTML structure; update CSS selectors in ScrapeGraphAI. Baserow “Row not found” errors – Ensure Product ID or another unique field is set as the primary key for upsert. Performance Tips Limit batch size to 5-10 URLs to avoid IP blocking. Use n8n’s built-in proxy settings if scraping sites with geo-restrictions. Pro Tips: Store historical JSON responses in a separate Baserow table for deeper analytics. Standardize currency symbols to avoid false change detections. Couple this workflow with an n8n Dashboard to visualize price trends in real time.
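The ±5% comparison performed by the If node boils down to a percentage delta between the stored Baserow price and the freshly scraped one. A minimal sketch (field names follow the table spec above; the exact node expression may differ):

```javascript
// Percentage change between the last stored price and the new scrape.
function priceDelta(previousPrice, currentPrice) {
  return ((currentPrice - previousPrice) / previousPrice) * 100;
}

// True when the move exceeds the alert threshold (default ±5%).
function significantChange(previousPrice, currentPrice, thresholdPct = 5) {
  return Math.abs(priceDelta(previousPrice, currentPrice)) > thresholdPct;
}
```

For example, a jacket dropping from 86.48 to 79.99 yields a delta of about -7.5%, which would trigger the Pushover alert.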
by Garri
Description This workflow is an n8n-based automation that lets users download TikTok/Reels videos without watermarks simply by sending the video link to a Telegram bot. It uses a Telegram Trigger to receive the link from the user, then makes an HTTP request to a third-party API (mediadl.app) to process and retrieve the download link. The workflow filters the results to find the "Download without watermark" link, downloads the video in MP4 format, and sends it back to the user directly in their Telegram chat. Key features: Supports the best available video quality (bestvideo+bestaudio). Automatically removes watermarks. Instant response directly in Telegram chat. Fully automated — no manual downloads required. How It Works Telegram Trigger The user sends a TikTok or Reels link to the Telegram bot. The workflow captures and stores the link for processing. HTTP Request – MediaDL API The link is sent via POST method to https://mediadl.app/api/download. The API processes the link and returns video file data. Wait Delay The workflow waits a few seconds to ensure the API response is fully ready. Edit Fields Extracts the video file URL from the API response. Additional Wait Delay Adds a short pause to avoid connection errors during the download process. HTTP Request – Proxy Download Downloads the MP4 video file directly from the filtered URL. Send Video via Telegram The downloaded video is sent back to the user in their Telegram chat. How to Set Up Create & Configure a Telegram Bot Open Telegram and search for BotFather. Send /newbot → choose a name & username for your bot. Copy the Bot Token provided — you’ll need it in n8n. Prepare Your n8n Environment Log in to your n8n instance (self-hosted or n8n Cloud). Go to Credentials → create new Telegram API credentials using your Bot Token. Import the Workflow In n8n, click Import and select the PROJECT_DOWNLOAD_TIKTOK_REELS.json file. 
Configure the Telegram Nodes In the Telegram Trigger and Send Video nodes, connect your Telegram API credentials. Configure the HTTP Request Nodes Ensure the Download2 and HTTP Request nodes have the correct URL and headers (pre-configured for mediadl.app). Make sure the responseFormat is set to file in the final download node. Activate the Workflow Toggle Activate in the top right corner of n8n. Test by sending a TikTok or Reels link to your bot — you should receive the no-watermark video in return.
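The core API call is the POST to mediadl.app described above. A hedged sketch of the request the HTTP Request node sends; the body field name "url" is an assumption, as the real shape is pre-configured in the imported workflow:

```javascript
// Build the request options for the MediaDL download endpoint.
// NOTE: the { url: ... } payload shape is an assumption; check the
// pre-configured HTTP Request node for the actual body.
function buildDownloadRequest(videoLink) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: videoLink }),
  };
}

// Usage (inside an n8n Code node or any Node.js runtime):
// const res = await fetch("https://mediadl.app/api/download", buildDownloadRequest(link));
// const data = await res.json(); // the Edit Fields node then extracts the MP4 URL
```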
by Friedemann Schuetz
Welcome to my VEO3 Video Generator Workflow! This automated workflow transforms simple text descriptions into professional 8-second videos using Google's cutting-edge VEO3 AI model. Users submit video ideas through a web form, and the system automatically generates optimized prompts, creates high-quality videos with native audio, and delivers them via Google Drive - all powered by Claude 4 Sonnet for intelligent prompt optimization. This workflow has the following sequence: VEO3 Generator Form - Web form interface for users to input video content, format, and duration Video Prompt Generator - AI agent powered by Claude 4 Sonnet that: Analyzes user input for video content requirements Creates factual, professional video titles Generates detailed VEO3 prompts with subject, context, action, style, camera motion, composition, ambiance, and audio elements Optimizes prompts for 16:9 landscape format and 8-second duration Create VEO3 Video - Submits the optimized prompt to fal.ai VEO3 API for video generation Wait 30 seconds - Initial waiting period for video processing to begin Check VEO3 Status - Monitors the video generation status via fal.ai API Video completed? 
- Decision node that checks if video generation is finished If not completed: Returns to wait cycle If completed: Proceeds to video retrieval Get VEO3 Video URL - Retrieves the final video download URL from fal.ai Download VEO3 Video - Downloads the generated MP4 video file Merge - Combines video data with metadata for final processing Save Video to Google Drive - Uploads the video to specified Google Drive folder Video Output - Displays completion message with Google Drive link to user The following access credentials are required for the workflow: **Anthropic API** (Claude 4 Sonnet): Documentation **Fal.ai API** (VEO3 Model): Create API key at https://fal.ai/dashboard/keys **Google Drive API**: Documentation Workflow Features: **User-friendly web form**: Simple interface for video content input **AI-powered prompt optimization**: Claude 4 Sonnet creates professional VEO3 prompts **Automatic video generation**: Leverages Google's VEO3 model via fal.ai **Status monitoring**: Real-time tracking of video generation progress **Google Drive integration**: Automatic upload and sharing of generated videos **Structured output**: Consistent video titles and professional prompt formatting **Audio optimization**: VEO3's native audio generation with ambient sounds and music Current Limitations: **Format**: Only 16:9 landscape videos supported **Duration**: Only 8-second videos supported **Processing time**: Videos typically take 60-120 seconds to generate Use Cases: **Content creation**: Generate videos for social media, websites, and presentations **Marketing materials**: Create promotional videos and advertisements **Educational content**: Produce instructional and explanatory videos **Prototyping**: Rapid video concept development and testing **Creative projects**: Artistic and experimental video generation **Business presentations**: Professional video content for meetings and pitches Feel free to contact me via LinkedIn if you have any questions!
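The wait → status-check → branch cycle in this workflow is a standard polling loop. A sketch, assuming fal.ai's queue API reports status strings such as "COMPLETED" (verify against the actual Check VEO3 Status response):

```javascript
// True once the fal.ai queue reports the render as finished.
// The "COMPLETED" literal is an assumption; confirm it in the
// Check VEO3 Status node's output.
function isVideoReady(statusResponse) {
  return statusResponse.status === "COMPLETED";
}

// Polling loop mirroring "Wait 30 seconds" -> "Check VEO3 Status":
// async function waitForVideo(checkStatus, intervalMs = 30000) {
//   let status = await checkStatus();
//   while (!isVideoReady(status)) {
//     await new Promise((resolve) => setTimeout(resolve, intervalMs));
//     status = await checkStatus();
//   }
//   return status;
// }
```

With 60-120 second generation times, the loop typically runs two to four iterations.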
by vinci-king-01
Lead Scoring Pipeline with Mattermost and Trello This workflow automatically enriches incoming form-based leads, calculates a lead score from multiple data points, and then routes high-value prospects to a Mattermost alert channel while adding all leads to Trello for further handling. It centralizes lead intelligence and streamlines sales-team triage—no manual spreadsheet work required. Pre-conditions/Requirements Prerequisites n8n instance (self-hosted or n8n cloud) ScrapeGraphAI community node installed Active Trello and Mattermost workspaces Lead-capture form or webhook that delivers JSON payloads Required Credentials **Trello API Key & Token** – Access to the board/list where cards will be created **Mattermost Access Token** – Permission to post messages in the target channel **(Optional) Clearbit / Apollo / 3rd-party enrichment keys** – If you replace the sample enrichment HTTP requests Specific Setup Requirements

| Variable | Purpose | Example Value |
|--------------------------|---------------------------------------------|---------------|
| MM_CHANNEL_ID | Mattermost channel to post high-score leads | leads-alerts |
| TRELLO_BOARD_ID | Board where new cards are added | 62f1d… |
| TRELLO_LIST_ID_HOT | Trello list for hot leads | Hot Deals |
| TRELLO_LIST_ID_BACKLOG | Trello list for all other leads | New Leads |
| LEAD_SCORE_THRESHOLD | Score above which a lead is considered hot | 70 |

How it works This workflow grabs new leads at a defined interval, enriches each lead with external data, computes a custom score, and routes the lead: high scorers trigger a Mattermost alert and are placed in a “Hot Deals” list, while the rest are stored in a “Backlog” list on Trello. All actions are fully automated and run unattended once configured. Key Steps: **Schedule Trigger**: Runs every 15 minutes to poll for new form submissions. **HTTP Request – Fetch Leads**: Retrieves the latest unprocessed leads from your form backend or CRM API. 
**Split In Batches**: Processes leads 20 at a time to respect API rate limits. **HTTP Request – Enrich Lead**: Calls external enrichment (e.g., Clearbit) to append company and person data. **Code – Calculate Score**: JavaScript that applies weightings to enriched attributes and outputs a numeric score. **IF – Score Threshold**: Branches flow based on LEAD_SCORE_THRESHOLD. **Mattermost Node**: Sends a rich-text message with lead details for high-score prospects. **Trello Node (Hot List)**: Creates a Trello card in the “Hot Deals” list for high-value leads. **Trello Node (Backlog)**: Creates a Trello card in the “New Leads” list for everyone else. **Merge & Flag Processed**: Marks leads as processed to avoid re-processing in future runs. Set up steps Setup Time: 10–15 minutes Import the Workflow: Download the JSON template and import it into n8n. Create / Select Credentials: Add your Trello API key & token under Trello API credentials. Add your Mattermost personal access token under Mattermost API credentials. Configure Environment Variables: Set MM_CHANNEL_ID, TRELLO_BOARD_ID, TRELLO_LIST_ID_HOT, TRELLO_LIST_ID_BACKLOG, and LEAD_SCORE_THRESHOLD in n8n → Settings → Environment. Form Backend Endpoint: Update the first HTTP Request node with the correct URL and authentication for your form or CRM. (Optional) Enrichment Provider: Replace the sample enrichment HTTP Request with your chosen provider’s endpoint and credentials. Test Run: Execute the workflow manually with a sample payload to ensure Trello cards and Mattermost messages are produced. Activate: Enable the workflow; it will now run on the defined schedule. Node Descriptions Core Workflow Nodes: **Schedule Trigger** – Triggers the workflow every 15 minutes. **HTTP Request (Fetch Leads)** – Pulls unprocessed leads. **SplitInBatches** – Limits processing to 20 leads per batch. **HTTP Request (Enrich Lead)** – Adds firmographic & technographic data. **Code (Calculate Score)** – JavaScript scoring algorithm; outputs a score field. 
**IF (Score ≥ Threshold)** – Determines routing path. **Mattermost** – Sends a formatted message with lead summary & score. **Trello (Create Card)** – Adds the lead as a card to the appropriate list. **Merge (Flag Processed)** – Updates the source system to mark the lead as processed. Data Flow: Schedule Trigger → HTTP Request (Fetch Leads) → SplitInBatches → HTTP Request (Enrich Lead) → Code (Calculate Score) → IF IF (Yes) → Mattermost → Trello (Hot List) IF (No) → Trello (Backlog) Both branches → Merge (Flag Processed) Customization Examples Adjust Scoring Weights // Code node: adjust weights to change scoring logic const weights = { industry: 15, companySize: 25, jobTitle: 20, intentSignals: 40 }; Dynamic Trello List Mapping // Use a lookup table instead of the IF node const mapping = { hot: 'TRELLO_LIST_ID_HOT', cold: 'TRELLO_LIST_ID_BACKLOG' }; items[0].json.listId = mapping[items[0].json.segment]; return items; Data Output Format The workflow outputs structured JSON data: { "leadId": "12345", "email": "jane.doe@example.com", "score": 82, "priority": "hot", "trelloCardUrl": "https://trello.com/c/abc123", "mattermostPostId": "78yzk9n8ppgkkp" } Troubleshooting Common Issues Trello authentication fails – Ensure the token has write access and that the API key & token pair belong to the same Trello account. Mattermost message not sent – Confirm the token can post in the target channel and that MM_CHANNEL_ID is correct. Performance Tips Batch leads in groups of 20–50 to avoid enrichment API rate-limit errors. Cache enrichment responses for repeat domains to reduce API calls. Pro Tips: Add a second IF node to send ultra-high (>90) scores directly to an account executive via email. Store raw enrichment responses in a database for future analytics. Use n8n’s built-in Execution Data Save to debug edge cases without rerunning external API calls.
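Building on the weights object from the customization example above, a complete Code node might look like this. The boolean and numeric lead attributes are illustrative assumptions; adapt them to whatever your enrichment provider returns:

```javascript
// Sample scoring logic for the "Code - Calculate Score" node.
// Attribute names (industryMatch, companySize, jobTitle, intentSignals)
// are assumptions; map them to your enrichment payload.
const weights = { industry: 15, companySize: 25, jobTitle: 20, intentSignals: 40 };

function calculateScore(lead) {
  let score = 0;
  if (lead.industryMatch) score += weights.industry;
  if ((lead.companySize || 0) >= 50) score += weights.companySize;
  if (/vp|head|director|chief/i.test(lead.jobTitle || "")) score += weights.jobTitle;
  if (lead.intentSignals) score += weights.intentSignals;
  return score; // 0-100, compared against LEAD_SCORE_THRESHOLD
}
```

A lead with a matching industry, 100 employees, a "VP Sales" title, and intent signals scores 100; one with none of these scores 0.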
by Daniel Shashko
How it Works This workflow accepts meeting transcripts via webhook (Zoom, Google Meet, Teams, Otter.ai, or manual notes), immediately processing them through an intelligent pipeline that eliminates post-meeting admin work. The system parses multiple input formats (JSON, form data, transcription outputs), extracting meeting metadata including title, date, attendees, transcript content, duration, and recording URLs. OpenAI analyzes the transcript to extract eight critical dimensions: executive summary, key decisions with ownership, action items with assigned owners and due dates, discussion topics, open questions, next steps, risks/blockers, and follow-up meeting requirements—all returned as structured JSON. The intelligence engine enriches each action item with unique IDs, priority scores (weighing urgency + owner assignment + due date), status initialization, and meeting context links, then calculates a completeness score (0-100) that penalizes missing owners and undefined deadlines. Multi-channel distribution ensures visibility: Slack receives formatted summaries with emoji categorization for decisions (✅), action items (🎯) with priority badges and owner assignments, and completeness scores (📊). Notion gets dual-database updates—meeting notes with formatted decisions and individual task cards in your action item database with full filtering and kanban capabilities. Task owners receive personalized HTML emails with priority color-coding and meeting context, while Google Calendar creates due-date reminders as calendar events. Every meeting logs to Google Sheets for analytics tracking: attendee count, duration, action items created, priority distribution, decision count, completeness score, and follow-up indicators. The workflow returns a JSON response confirming successful processing with meeting ID, action item count, and executive summary. The entire pipeline executes in 8-12 seconds from submission to full distribution. Who is this for? 
Product and engineering teams drowning in scattered action items across tools Remote-first companies where verbal commitments vanish after calls Executive teams needing auditable decision records without dedicated note-takers Startups juggling 10+ meetings daily without time for manual follow-up Operations teams tracking cross-functional initiatives requiring accountability Setup Steps **Setup time:** 25-35 minutes **Requirements:** OpenAI API key, Slack workspace, Notion account, Google Workspace (Calendar/Gmail/Sheets), optional transcription service Webhook Trigger: Automatically generates URL; configure as a POST endpoint accepting JSON with title, date, attendees, transcript, duration, recording_url, organizer Transcription Integration: Connect Otter.ai/Fireflies.ai/Zoom webhooks, or create a manual submission form OpenAI Analysis: Add API credentials, configure GPT-4 or GPT-3.5-turbo, temperature 0.3, max tokens 1500 Intelligence Synthesis: JavaScript calculates priority scores (0-40 range) and completeness metrics (0-100); customize thresholds Slack Integration: Create an app with the chat:write scope, get a bot token, replace the channel ID placeholder with your #meeting-summaries channel Notion Databases: Create a "Meeting Notes" database (title, date, attendees, summary, action items, completeness, recording URL) and an "Action Items" database (title, assigned to, due date, priority, status, meeting relation), share both with the integration, add the token Email Notifications: Configure Gmail OAuth2 or SMTP, customize the HTML template with company branding Calendar Reminders: Enable the Calendar API; creates events on due dates at 9 AM (adjustable) and adds the task owner as an attendee Analytics Tracking: Create a Google Sheet with columns for Meeting_ID, Title, Date, Attendees, Duration, Action_Items, High_Priority, Decisions, Completeness, Unassigned_Tasks, Follow_Up_Needed Test: POST a sample transcript, then verify the Slack message, Notion entries, emails, calendar events, and Sheets logging Customization 
Guidance **Meeting Types:** Daily standups (reduce tokens to 500, Slack-only), sprint planning (add Jira integration), client calls (add CRM logging), executive reviews (stricter completeness thresholds) **Priority Scoring:** Add an urgency multiplier for <48hr due dates, owner seniority weights, customer impact flags **AI Prompt:** Customize to emphasize deadlines, blockers, or technical decisions; add date parsing for phrases like "by end of week" **Notification Routing:** Critical priority (score >30) → Slack DM + email, High (20-30) → channel + email, Medium/Low → email only **Tool Integrations:** Add Jira/Linear for ticket creation, Asana/Monday for project management, Salesforce/HubSpot for CRM logging, GitHub for issue creation **Analytics:** Build dashboards for meeting effectiveness scores, action item velocity, recurring topic clustering, team productivity metrics **Cost Optimization:** ~1,200 tokens/meeting × $0.002/1K (GPT-3.5) = $0.0024/meeting; use the batch API for a 50% discount and cache common patterns Once configured, this workflow becomes your team's institutional memory—capturing every commitment and decision while eliminating hours of weekly admin work, ensuring accountability is automatic and follow-through is guaranteed. Built by Daniel Shashko Connect on LinkedIn
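The priority (0-40) and completeness (0-100) metrics from the Intelligence Synthesis step can be sketched as follows. The specific point values are assumptions for illustration, not the template's exact numbers:

```javascript
// Priority in the 0-40 range: urgency + owner assignment + due date.
// Point values are illustrative assumptions; tune to taste.
function priorityScore(item) {
  let score = 0;
  if (item.urgency === "high") score += 20;
  else if (item.urgency === "medium") score += 10;
  if (item.owner) score += 10;
  if (item.dueDate) score += 10;
  return score; // 0-40
}

// Completeness starts at 100 and is penalized for each action item
// that is missing an owner or a due date.
function completenessScore(actionItems) {
  let score = 100;
  for (const item of actionItems) {
    if (!item.owner) score -= 10;
    if (!item.dueDate) score -= 10;
  }
  return Math.max(score, 0);
}
```

Under this scheme, a high-urgency item with an owner and a due date hits the maximum of 40, and a meeting with one fully specified and one unassigned, undated item scores 80% complete.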
by explorium
Research Agent - Automated Sales Meeting Intelligence This n8n workflow automatically prepares comprehensive sales research briefs every morning for your upcoming meetings by analyzing both the companies you're meeting with and the individual attendees. The workflow connects to your calendar, identifies external meetings, enriches companies and contacts with deep intelligence from Explorium, and delivers personalized research reports—giving your sales team everything they need for informed, confident conversations. DEMO Template Demo Credentials Required To use this workflow, set up the following credentials in your n8n environment: Google Calendar (or Outlook) **Type:** OAuth2 **Used for:** Reading daily meeting schedules and identifying external attendees Alternative: Microsoft Outlook Calendar Get credentials at Google Cloud Console Explorium API **Type:** Generic Header Auth **Header:** Authorization **Value:** Bearer YOUR_API_KEY **Used for:** Business/prospect matching, firmographic enrichment, professional profiles, LinkedIn posts, website changes, competitive intelligence Get your API key at Explorium Dashboard Explorium MCP **Type:** HTTP Header Auth **Used for:** Real-time company intelligence and supplemental research for AI agents Connect to: https://mcp.explorium.ai/mcp Anthropic API **Type:** API Key **Used for:** AI-powered company and attendee research analysis Get your API key at Anthropic Console Slack (or preferred output) **Type:** OAuth2 **Used for:** Delivering research briefs Alternative options: Google Docs, Email, Microsoft Teams, CRM updates Go to Settings → Credentials, create these credentials, and assign them in the respective nodes before running the workflow. Workflow Overview Node 1: Schedule Trigger Automatically runs the workflow on a recurring schedule. 
**Type:** Schedule Trigger **Default:** Every morning before business hours **Customizable:** Set to any interval (hourly, daily, weekly) or specific times Alternative Trigger Options: **Manual Trigger:** On-demand execution **Webhook:** Triggered by calendar events or CRM updates Node 2: Get many events Retrieves meetings from your connected calendar. **Calendar Source:** Google Calendar (or Outlook) **Authentication:** OAuth2 **Time Range:** Current day + 18 hours (configurable via timeMax) **Returns:** All calendar events with attendee information, meeting titles, times, and descriptions Node 3: Filter for External Meetings Identifies meetings with external participants and filters out internal-only meetings. Filtering Logic: Extracts attendee email domains Excludes your company domain (e.g., 'explorium.ai') Excludes calendar system addresses (e.g., 'resource.calendar.google.com') Only passes events with at least one external attendee Important Setup Note: Replace 'explorium.ai' in the code node with your company domain to properly filter internal meetings. Output: Events with external participants only external_attendees: Array of external contact emails company_domains: Unique list of external company domains per meeting external_attendee_count: Number of external participants Company Research Pipeline Node 4: Loop Over Items Iterates through each meeting with external attendees for company research. Node 5: Extract External Company Domains Creates a deduplicated list of all external company domains from the current meeting. Node 6: Explorium API: Match Business Matches company domains to Explorium's business entity database. **Method:** POST **Endpoint:** /v1/businesses/match **Authentication:** Header Auth (Bearer token) Returns: business_id: Unique Explorium identifier matched_businesses: Array of matches with confidence scores Company name and basic info Node 7: If Validates that a business match was found before proceeding to enrichment. 
- **Condition:** business_id is not empty
- **If True:** Proceed to parallel enrichment nodes
- **If False:** Skip to next company in loop

Nodes 8-9: Parallel Company Enrichment

Node 8: Explorium API: Business Enrich
- **Endpoints:** /v1/businesses/firmographics/enrich, /v1/businesses/technographics/enrich
- **Enrichment Types:** firmographics, technographics
- **Returns:** Company name, description, website, industry, employees, revenue, headquarters location, ticker symbol, LinkedIn profile, logo, full tech stack, nested tech stack by category, BI & analytics tools, sales tools, marketing tools

Node 9: Explorium API: Fetch Business Events
- **Endpoint:** /v1/businesses/events/fetch
- **Event Types:** New funding rounds, new investments, mergers & acquisitions, new products, new partnerships
- **Date Range:** September 1, 2025 - November 4, 2025
- **Returns:** Recent business milestones and financial events

Node 10: Merge
Combines enrichment responses and events data into a single data object.

Node 11: Cleans Merge Data Output
Transforms merged enrichment data into a structured format for AI analysis.

Node 12: Company Research Agent
AI agent (Claude Sonnet 4) that analyzes company data to generate actionable sales intelligence.
- **Input:** Structured company profile with all enrichment data
- **Analysis focus:** Company overview and business context; recent website changes and strategic shifts; tech stack and product focus areas; potential pain points and challenges; how Explorium's capabilities align with their needs; timely conversation starters based on recent activity
- **Connected to Explorium MCP:** Can pull additional real-time intelligence if needed to create a more detailed analysis

Node 13: Create Company Research Output
Formats the AI analysis into a readable, shareable research brief.

**Attendee Research Pipeline**

Node 14: Create List of All External Attendees
Compiles all unique external attendee emails across all meetings.

Node 15: Loop Over Items2
Iterates through each external attendee for individual enrichment.
Node 16: Extract External Company Domains1
Extracts the company domain from each attendee's email.

Node 17: Explorium API: Match Business1
Matches the attendee's company domain to get a business_id for prospect matching.
- **Method:** POST
- **Endpoint:** /v1/businesses/match
- **Purpose:** Link attendee to their company

Node 18: Explorium API: Match Prospect
Matches the attendee email to Explorium's professional profile database.
- **Method:** POST
- **Endpoint:** /v1/prospects/match
- **Authentication:** Header Auth (Bearer token)
- Returns: prospect_id (unique professional profile identifier)

Node 19: If1
Validates that a prospect match was found.
- **Condition:** prospect_id is not empty
- **If True:** Proceed to prospect enrichment
- **If False:** Skip to next attendee

Node 20: Explorium API: Prospect Enrich
Enriches the matched prospect using multiple Explorium endpoints.
- **Enrichment Types:** contacts, profiles, linkedin_posts
- **Endpoints:** /v1/prospects/contacts/enrich, /v1/prospects/profiles/enrich, /v1/prospects/linkedin_posts/enrich
- Returns:
  - **Contacts:** Professional email, email status, all emails, mobile phone, all phone numbers
  - **Profiles:** Full professional history, current role, skills, education, company information, experience timeline, job titles and seniority
  - **LinkedIn Posts:** Recent LinkedIn activity, post content, engagement metrics, professional interests and thought leadership

Node 21: Cleans Enrichment Outputs
Structures prospect data for AI analysis.

Node 22: Attendee Research Agent
AI agent (Claude Sonnet 4) that analyzes prospect data to generate personalized conversation intelligence.
- **Input:** Structured professional profile with activity data
- **Analysis focus:** Career background and progression; current role and responsibilities; recent LinkedIn activity themes and interests; potential pain points in their role; relevant Explorium capabilities for their needs; personal connection points (education, interests, previous companies); opening conversation starters
- **Connected to Explorium MCP:** Can gather additional company or market context if needed

Node 23: Create Attendee Research Output
Formats the attendee analysis into a readable brief with clear sections.

Node 24: Merge2
Combines company research output with attendee information for final assembly.

Node 25: Loop Over Items1
Manages the final loop that combines company and attendee research for output.

Node 26: Send a message (Slack)
Delivers the combined research briefs to a specified Slack channel or user.

Alternative output options:
- **Google Docs:** Create a formatted document per meeting
- **Email:** Send to the meeting organizer or sales rep
- **Microsoft Teams:** Post to channels or DMs
- **CRM:** Update opportunity/account records with research
- **PDF:** Generate downloadable research reports

**Workflow Flow Summary**

1. Schedule: The workflow runs automatically every morning
2. Fetch Calendar: Pull today's meetings from Google Calendar/Outlook
3. Filter: Identify meetings with external attendees only
4. Extract Companies: Get unique company domains from external attendees
5. Extract Attendees: Compile a list of all external contacts

Company research path:
- Match Companies: Identify businesses in the Explorium database
- Enrich (parallel): Pull firmographics, website changes, competitive landscape, events, and challenges
- Merge & Clean: Combine and structure company data
- AI Analysis: Generate a company research brief with insights and talking points
- Format: Create readable company research output

Attendee research path:
- Match Prospects: Link attendees to professional profiles
- Enrich (parallel): Pull profiles, job changes, and LinkedIn activity
- Merge & Clean: Combine and structure prospect data
- AI Analysis: Generate attendee research with background and approach
- Format: Create readable attendee research output

Delivery:
- Combine: Merge company and attendee research for each meeting
- Send: Deliver complete research briefs to Slack or your preferred platform

This workflow eliminates manual pre-meeting research by automatically preparing comprehensive intelligence on both companies and individuals, giving sales teams the context and confidence they need for every conversation.

**Customization Options**

Calendar integration (works with multiple calendar platforms):
- **Google Calendar:** Full OAuth2 integration
- **Microsoft Outlook:** Calendar API support
- **CalDAV:** Generic calendar protocol support

Trigger flexibility (adjust when research runs):
- **Morning Routine:** Default daily at 7 AM
- **On-Demand:** Manual trigger for specific meetings
- **Continuous:** Hourly checks for new meetings

Enrichment depth (add or remove enrichment endpoints):
- **Company:** Technographics, funding history, news mentions, hiring signals
- **Prospects:** Contact information, social profiles, company changes
- **Customizable:** Select only the data you need to optimize speed and costs

Research scope (configure what gets researched):
- **All External Meetings:** Default behavior
- **Filtered by Keywords:** Only meetings with specific titles
- **By Attendee Count:** Only meetings with X+ external attendees
- **By Calendar:** Specific calendars only

Output destinations (deliver research to your preferred platform):
- **Messaging:** Slack, Microsoft Teams, Discord
- **Documents:** Google Docs, Notion, Confluence
- **Email:** Gmail, Outlook, custom SMTP
- **CRM:** Salesforce, HubSpot (update account notes)
- **Project Management:** Asana, Monday.com, ClickUp

AI model options (swap AI providers based on needs):
- Default: Anthropic Claude (Sonnet 4)
- Alternatives: OpenAI GPT-4, Google Gemini

**Setup Notes**

- Domain configuration: Replace 'explorium.ai' in the Filter for External Meetings code node with your company domain
- Calendar connection: Ensure OAuth2 credentials have calendar read permissions
- Explorium credentials: Both the API key and MCP credentials must be configured
- Output timing: The schedule trigger should run with enough lead time before your first meetings
- Rate limits: Adjust loop batch sizes if you hit API rate limits during enrichment
- Slack configuration: Select the destination channel or user for research delivery
- Data privacy: Research is based on publicly available professional information and company data

This workflow acts as your automated sales researcher, preparing detailed intelligence reports every morning so your team walks into every meeting informed, prepared, and ready to have meaningful conversations that drive business forward.
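The domain-filtering logic of the Filter for External Meetings code node can be sketched roughly as below. This is a simplified standalone version, not the workflow's exact code: the `attendees` shape follows Google Calendar's event format, and 'explorium.ai' is the placeholder domain you replace with your own.

```javascript
// Simplified sketch of the "Filter for External Meetings" logic.
// Replace COMPANY_DOMAIN with your own domain, as noted in the setup instructions.
const COMPANY_DOMAIN = 'explorium.ai';
const IGNORED_DOMAINS = [COMPANY_DOMAIN, 'resource.calendar.google.com'];

function filterExternalMeetings(events) {
  return events
    .map((event) => {
      const attendees = event.attendees || [];
      // Keep only attendees whose email domain is neither internal nor a calendar resource
      const external = attendees.filter((a) => {
        const domain = (a.email || '').split('@')[1] || '';
        return domain && !IGNORED_DOMAINS.includes(domain);
      });
      return {
        ...event,
        external_attendees: external.map((a) => a.email),
        company_domains: [...new Set(external.map((a) => a.email.split('@')[1]))],
        external_attendee_count: external.length,
      };
    })
    // Only pass events with at least one external attendee
    .filter((event) => event.external_attendee_count > 0);
}
```

In the real code node, events arrive as n8n items (`$input.all()`) and must be returned as `{ json: ... }` objects; the fields `external_attendees`, `company_domains`, and `external_attendee_count` match the outputs listed for Node 3.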
by Monisha Panda
**Description**

This n8n template demonstrates how to build an AI-powered Market Research Assistant using a multi-agent workflow. It helps you get a 360-degree view of a product idea or research topic by analysing:
- Customer insights and pain points
- Market size and macro/micro-economic trends
- Competitive landscape and alternatives

The workflow mirrors how product managers and strategy teams conduct discovery: by breaking research down into parallel workstreams and then synthesizing insights into a single narrative.

**How it works**

Planner Agent
The main agent receives your research topic as input and defines:
- Research objective
- Key areas of focus (Customer, Market, Competition)
- Assumptions and constraints

Parallel Research Agents
Based on the planner's output, three specialist agents run in parallel:
- Customer Insights Agent: Researches public sources such as articles and forums to infer customer behaviour, pain points, and existing tools.
- Market Scan Agent: Analyses macro-economic and micro-economic trends, estimates TAM/SAM/SOM, and highlights key risks and assumptions.
- Competitor Insights Agent: Identifies existing competitors and substitutes and summarises how they are positioned in the market.

Synthesis Agent
The outputs from all research agents are consolidated and analysed by a synthesis agent, which produces a market discovery memo.

Final Output
The discovery memo is generated as a document and sent to your email.

**How to use**

1. Trigger the workflow via the chat message node.
2. Provide your research topic or product idea, along with optional context such as target market.
3. The workflow runs automatically and delivers a structured discovery memo to your inbox.

**Setup Steps**

- API credentials for Groq or OpenAI (LLM) and Documentero (document generation)
- A configured Documentero template
- Gmail OAuth or email credentials for delivery of the memo
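The planner → parallel agents → synthesis pattern above can be expressed compactly in code. The sketch below is conceptual, not the template's node logic: `callAgent` is a stand-in for whatever LLM call you wire up (Groq or OpenAI), and the prompts are illustrative.

```javascript
// Conceptual sketch of the planner -> parallel research agents -> synthesis pattern.
// callAgent is an assumed stand-in for an LLM call (Groq or OpenAI); prompts are illustrative.
async function runDiscovery(topic, callAgent) {
  // Planner: defines objective, focus areas, assumptions
  const plan = await callAgent(`Plan research for: ${topic}`);

  // The three specialist agents run concurrently, mirroring n8n's parallel branches
  const [customer, market, competition] = await Promise.all([
    callAgent(`Customer insights for: ${topic}\n${plan}`),
    callAgent(`Market scan (TAM/SAM/SOM, trends, risks) for: ${topic}\n${plan}`),
    callAgent(`Competitor landscape for: ${topic}\n${plan}`),
  ]);

  // Synthesis: consolidate all three workstreams into one discovery memo
  return callAgent(`Write a discovery memo:\n${customer}\n${market}\n${competition}`);
}
```

The design point is that the three research calls share the planner's output but not each other's, so they can run in parallel without ordering constraints.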
by Wan Dinie
**AI Content Generator with Auto Pexels Image Matching**

This n8n template demonstrates how to use AI to generate engaging content and automatically find matching royalty-free images based on the content's context. Use cases are many: try creating blog posts with hero images, generating social media content with visuals, or drafting email newsletters with relevant photos.

**Good to know**

- At the time of writing, Pexels offers free API access with up to 200 requests per hour. See the Pexels API for updated info.
- OpenAI API costs vary by model. GPT-4.1 mini is cheaper, while regular GPT-4.1 and above offer deeper content generation but cost more per request.
- Using the floating JavaScript node can reduce token usage by handling content processing and keyword extraction in a single prompt.

**How it works**

1. We collect a content topic or idea via a manual form trigger.
2. OpenAI generates initial content based on your input topic.
3. The AI extracts suitable keywords from the generated content to find matching images.
4. The keywords are sent to the Pexels API, which searches for relevant royalty-free stock images.
5. OpenAI creates the final polished content that complements the selected image.
6. The result is displayed as a formatted HTML template combining content and image together.

**How to use**

- The manual trigger node is used as an example, but feel free to replace it with another trigger such as a webhook or even a form.
- You can batch-generate multiple pieces of content by looping through a list, but of course the processing will take longer and cost more.

**Requirements**

- OpenAI API key (get one at https://platform.openai.com)
- Pexels API key (get free access at https://www.pexels.com/api)
- Valid content topics or ideas to generate from

**Customizing this workflow**

- **Optimize token usage:** Connect the floating "Extract Content and Image Keyword" JavaScript node to process everything in one prompt and minimize API costs. If you use this option, update the expressions in the "Pexels Image Search" node and the "Create Suitable Content Including Image" node to reference the extracted keywords from the JS node.
- Upgrade to GPT-4.1, GPT-5.1, or GPT-5.2 for more advanced and creative content generation.
- Change the HTML template output to other formats like Markdown, plain text, or JSON for different publishing platforms.
- For the long term, store the results in a database like Supabase or Google Sheets if you plan to reuse the content.
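The Pexels search step boils down to one authenticated GET request. A minimal sketch follows (the `per_page` value and the choice of the `large` image size are illustrative; check the Pexels API docs for the full response schema):

```javascript
// Minimal sketch of the Pexels image search used by the "Pexels Image Search" node.
// Build the search URL separately so it is easy to inspect and test.
function buildPexelsSearchUrl(keyword, perPage = 1) {
  return `https://api.pexels.com/v1/search?query=${encodeURIComponent(keyword)}&per_page=${perPage}`;
}

async function searchPexels(keyword, apiKey) {
  const res = await fetch(buildPexelsSearchUrl(keyword), {
    // Pexels expects the raw API key in the Authorization header (no "Bearer" prefix)
    headers: { Authorization: apiKey },
  });
  if (!res.ok) throw new Error(`Pexels request failed: ${res.status}`);
  const data = await res.json();
  // photos[0].src contains the image in several sizes (original, large, medium, ...)
  return data.photos.length ? data.photos[0].src.large : null;
}
```

In n8n this maps to an HTTP Request node with header authentication; the AI-extracted keywords feed the `query` parameter.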
by Anthony
**How It Works**

This workflow transforms any webpage into an AI-narrated audio summary delivered via WhatsApp:

1. Receive URL - The WhatsApp Trigger captures incoming messages and passes them to URL extraction
2. Extract & validate - A Code node extracts URLs using regex and validates their format; an IF node checks for errors
3. User feedback - Sends either an error message ("Please send valid URL") or a processing status ("Fetching and analyzing... 10-30 seconds")
4. Fetch webpage - A sub-workflow calls Jina AI Reader (https://r.jina.ai/) to fetch JavaScript-rendered content, bypassing bot blocks
5. Summarize content - GPT-4o-mini processes the webpage text in 6000-character chunks, extracts the title, and generates a concise summary
6. Generate audio - OpenAI TTS-1 converts the summary text to natural-sounding audio (Opus format for WhatsApp compatibility)
7. Deliver result - The WhatsApp node sends the audio message back to the user with the summary

Why Jina AI? Many modern websites (like digibyte.io) require JavaScript to load content. Standard HTTP requests only fetch the initial HTML skeleton ("JavaScript must be enabled"). Jina AI executes JavaScript and returns clean, readable text.

**Setup Steps**

Time estimate: ~20-25 minutes

1. WhatsApp Business API Setup (10-15 minutes)
- **Create Meta Developer App** - Go to https://developers.facebook.com/, create a Business app, and add the WhatsApp product
- **Get Phone Number ID** - Use Meta's test number or register your own business phone
- **Generate System User Token** - Create one at https://business.facebook.com/latest/settings/system_users (permanent token, no 4-hour expiry)
- **Configure Webhook** - Point it to your n8n instance URL and subscribe to "messages" events
- **Verify business** - Meta requires 3-5 verification steps (business, app, phone, system user)

2. Configure n8n Credentials (5 minutes)
- **OpenAI** - Add the API key in Credentials → OpenAI (used in 2 places: "Convert Summary to Audio" and "OpenAI Chat Model" in the sub-workflow)
- **WhatsApp OAuth** - Add in the WhatsApp Trigger node using the System User token from step 1
- **WhatsApp API** - Add in all WhatsApp action nodes (Send Error, Send Processing, Send Audio) using the same token

3. Link Sub-Workflow (3 minutes)
- Ensure the "[SUB] Get Webpage Summary" workflow is activated
- In the "Get Webpage Summary" node, select the sub-workflow from the dropdown
- Verify the workflow ID matches: QglZjvjdZ16BisPN

4. Update Phone Number IDs (2 minutes)
- Copy your Phone Number ID from the Meta console
- Update it in all WhatsApp nodes: Send Error Message, Send Processing Message, Send Audio Summary

5. Test the Flow (2 minutes)
- Activate both workflows (sub-workflow first, then main)
- Send a test message to WhatsApp: https://example.com
- Verify: the processing message arrives, then the audio summary is delivered within 30 seconds

**Important Notes**

WhatsApp caveats:
- **24-hour window** - You can't auto-message users after 24 hours unless they message first (send "Hi" each morning to reset)
- **Verification fatigue** - Meta requires multiple business verifications; budget 30-60 minutes if it's your first time
- **Test vs Production** - Test numbers work for single users; production requires business verification

Audio format:
- Uses the Opus codec (optimal for WhatsApp, smaller file size than MP3)
- Speed is set to 1.0 (normal pace); adjust it in the "Convert Summary to Audio" node if needed
- Cost: ~$0.015 per minute of audio generated

Jina AI integration:
- **Free tier** works for basic use (no API key required)
- Handles JavaScript-heavy sites automatically
- Add an Authorization: Bearer YOUR_KEY header for higher limits
- Alternative: Replace with Playwright/Puppeteer for self-hosted rendering

Cost breakdown (per summary):
- GPT-4o-mini summarization: ~$0.005-0.015
- OpenAI TTS audio: ~$0.005-0.015
- WhatsApp messages: Free (up to 1,000/month)
- **Total: ~$0.01-0.03 per summary**

Troubleshooting:
- "Cannot read properties of undefined" → Status update, not a message (the code node returns null correctly)
- "JavaScript must be enabled" → The website needs Jina AI (already implemented in the Fetch site texts node)
- Audio not sending → Check that the binary data field is named data in the TTS node
- No webhook received → Verify the n8n URL is publicly accessible and the webhook subscription includes "messages"

Pro tips:
- Change the voice in the TTS node: alloy (neutral), echo (male), nova (female), shimmer (soft)
- Adjust summary length: Modify chunkSize: 6000 in the sub-workflow's Text Splitter node (lower = faster but less detailed)
- Add rate limiting: Insert a Code node after the trigger to track user requests per hour
- Store summaries: Add a database node after "Clean up" to archive summaries for later retrieval

Use cases:
- Executive commuting - Consume industry news hands-free
- Research students - Cover 3x more sources while multitasking
- Visually impaired - Access any webpage via natural audio
- Sales teams - Research prospects on the go
- Content creators - Monitor competitors while exercising
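The extract-and-validate step (step 2 above) can be sketched as follows. The regex is an assumption, a simple http/https matcher rather than the workflow's exact pattern, and the error string mirrors the user-feedback message described earlier:

```javascript
// Sketch of the URL extraction / validation logic from the Code node.
// The regex is a simple http(s) matcher -- an assumption, not the workflow's exact pattern.
function extractUrl(messageText) {
  const match = (messageText || '').match(/https?:\/\/[^\s]+/);
  if (!match) {
    return { error: 'Please send a valid URL' };
  }
  try {
    // new URL() throws on malformed input, giving a second validation pass
    return { url: new URL(match[0]).href };
  } catch {
    return { error: 'Please send a valid URL' };
  }
}
```

The sub-workflow then prefixes the validated URL with https://r.jina.ai/ so that Jina AI Reader returns a JavaScript-rendered, readable version of the page.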
by Amit Mehta
**N8N Workflow: Printify Automation - Update Title and Description - AlexK1919**

This workflow automates the process of getting products from Printify, generating new titles and descriptions using OpenAI, and updating those products.

**How it Works**

This workflow automatically retrieves a list of products from a Printify store, processes them to generate new titles and descriptions based on brand guidelines and custom instructions, and then updates the products on Printify with the new information. It also interacts with Google Sheets to track the status of the products being processed. The workflow can be triggered either manually or by an update in a Google Sheet.

**Use Cases**

- **E-commerce Automation:** Automating content updates for a Printify store.
- **Marketing & SEO:** Generating SEO-friendly or seasonal content for products using AI.
- **Product Management:** Batch-updating product titles and descriptions without manual effort.

**Setup Instructions**

1. Printify API Credentials: Set up httpHeaderAuth credentials for Printify to allow the workflow to get and update products.
2. OpenAI API Credentials: Provide an API key for OpenAI in the openAiApi credentials.
3. Google Sheets Credentials: The workflow requires two separate Google Sheets credentials: one for the trigger (googleSheetsTriggerOAuth2Api) and another for appending/updating data (googleSheetsOAuth2Api).
4. Google Sheets Setup: You need a Google Sheet to store product information and track the status of the updates. The workflow is linked to a specific spreadsheet.
5. Brand Guidelines: The Brand Guidelines + Custom Instructions node must be updated with your specific brand details and any custom instructions for the AI.

**Workflow Logic**

1. Trigger: The workflow can be triggered manually or by an update in a Google Sheet when the upload column is changed to "yes".
2. Get Product Info: It fetches the shop ID and then a list of all products from Printify.
3. Process Products: The product data is split, and the workflow loops through each product.
4. AI Content Generation: For each product, the Generate Title and Desc node uses OpenAI to create a new title and description based on the original content, brand guidelines, and custom instructions.
5. Google Sheets Update: The workflow appends the product information and a "Product Processing" status to a Google Sheet. It then updates the row with the newly generated title and description and changes the status to "Option added".
6. Printify Update: The Printify - Update Product node sends a PUT request to the Printify API to apply the new title and description to the product.

**Node Descriptions**

| Node Name | Description |
|-----------|-------------|
| When clicking 'Test workflow' | A manual trigger for testing the workflow. |
| Google Sheets Trigger | An automated trigger that starts the workflow when the upload column in the Google Sheet is updated. |
| Printify - Get Shops | Fetches the list of shops associated with the Printify account. |
| Printify - Get Products | Retrieves all products from the specified Printify shop. |
| Brand Guidelines + Custom Instructions | A Set node to store brand guidelines and custom instructions for the AI. |
| Generate Title and Desc | An OpenAI node that generates a new title and description based on the provided inputs. |
| GS - Add Product Option | Appends a new row to a Google Sheet to track the processing status of a product. |
| Update Product Option | Updates an existing row in the Google Sheet with the new product information and status. |
| Printify - Update Product | A PUT request to the Printify API to update a product with new information. |

**Customization Tips**

- You can swap out the Printify API calls with similar services like Printful or Vistaprint.
- Modify the Brand Guidelines + Custom Instructions node to change the brand name, tone, or specific instructions for the AI.
- Change the number of options the workflow should generate by modifying the Number of Options node.
- You can change the OpenAI model used in the Generate Title and Desc node, for example from gpt-4o-mini to another model.

**Suggested Sticky Notes for Workflow**

- "Update your Brand Guidelines before running this workflow. You can also add custom instructions for the AI node."
- "You can swap out the API calls to similar services like Printful, Vistaprint, etc."
- "Set the Number of Options you'd like for the Title and Description"

**Required Files**

- 1V1gcK6vyczRqdZC_Printify_Automation_-Update_Title_and_Description-_AlexK1919.json: The main n8n workflow export for this automation.
- The Google Sheets template for this workflow.

**Testing Tips**

- Run the workflow with the manual trigger to see the flow from start to finish.
- Change the upload column in the Google Sheet to "yes" to test the automated trigger.
- Verify that the new titles and descriptions are correctly updated on Printify.

**Suggested Tags & Categories**

- Printify
- OpenAI
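The Printify update described in the workflow logic is a single authenticated PUT per product. A minimal sketch is below; the endpoint path follows Printify's public v1 API, but the exact product fields accepted should be verified against the current Printify API docs, and PRINTIFY_TOKEN, shopId, and productId are placeholders:

```javascript
// Sketch of the "Printify - Update Product" PUT request.
// PRINTIFY_TOKEN, shopId, and productId are placeholders.
function buildUpdateRequest(shopId, productId, title, description, token = 'PRINTIFY_TOKEN') {
  return {
    method: 'PUT',
    url: `https://api.printify.com/v1/shops/${shopId}/products/${productId}.json`,
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    // Only the fields being changed are sent; here, the AI-generated title and description
    body: JSON.stringify({ title, description }),
  };
}

async function updateProduct(shopId, productId, title, description, token) {
  const { method, url, headers, body } = buildUpdateRequest(shopId, productId, title, description, token);
  const res = await fetch(url, { method, headers, body });
  if (!res.ok) throw new Error(`Printify update failed: ${res.status}`);
  return res.json();
}
```

In the n8n node this corresponds to an HTTP Request node using the httpHeaderAuth credential from the setup instructions.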