by Jitesh Dugar
# Customer Support Ticket Documentation Automation

Automatically transform resolved support tickets into professional, AI-powered PDF documentation with complete tracking and team notifications.

## Overview

This workflow automates the entire process of documenting resolved support tickets — from receiving ticket data to generating professional PDF case studies, storing them in Google Drive, tracking them in spreadsheets, and notifying your team. Powered by AI, it creates consistent, high-quality documentation that can be reused for knowledge base articles, training materials, and compliance records.

## What This Workflow Does

- Receives resolved support tickets via webhook from your support platform
- Extracts and normalizes ticket data (works with Zendesk, Freshdesk, and custom formats)
- Generates AI-powered summaries using OpenAI GPT-4, creating structured case studies with:
  - Problem description
  - Troubleshooting steps taken
  - Final resolution
  - Key takeaways and prevention tips
- Creates professional PDF documents with branded HTML templates
- Uploads PDFs to organized Google Drive folders
- Tracks all tickets in a Google Sheets database for reporting and analytics
- Sends Slack notifications to your team with links to completed documentation
- Handles errors gracefully with automatic alerts when issues occur

## Key Features

- **Fully Automated:** Zero manual intervention after setup
- **AI-Powered Documentation:** Intelligent summaries that extract insights from raw ticket data
- **Professional Output:** Branded, print-ready PDFs with modern styling
- **Multi-Platform Support:** Works with any support tool that can send webhooks
- **Centralized Tracking:** All tickets logged in Google Sheets for easy reporting
- **Error Handling:** Built-in failure detection with Slack alerts
- **Customizable:** Easy to brand with your company colors, logo, and styling
- **Scalable:** Handles hundreds of tickets per day

## Use Cases

- **Knowledge Base Building:** Automatically create searchable documentation from real support cases
- **Team Training:** Build a library of resolved issues for onboarding new support agents
- **Compliance & Audit:** Maintain complete records of all customer interactions
- **Performance Analytics:** Track resolution times, common issues, and agent performance
- **Customer Success:** Share professional case studies with stakeholders
- **Process Improvement:** Identify recurring issues and optimize workflows

## Prerequisites

### Required Services & Accounts

- **n8n** (self-hosted or cloud)
- **OpenAI account** with API access
- **PDFMunk account** (for HTML → PDF conversion)
- **Google Workspace** (for Drive & Sheets)
- **Slack workspace** (optional but recommended)
- **Support platform** that can send webhooks (Zendesk, Freshdesk, Intercom, etc.)

### Required Credentials

- OpenAI API key
- PDFMunk API key
- Google Drive OAuth2 credentials
- Google Sheets OAuth2 credentials
- Slack bot token (OAuth2)

## Setup Instructions

### 1. Import the Workflow

1. Copy the workflow JSON.
2. In n8n, click "Import from File" or "Import from Clipboard."
3. Paste and import.
### 2. Configure Credentials

**OpenAI API**
1. Get an API key from OpenAI.
2. Add it in n8n: Credentials → OpenAI API → paste key.

**PDFMunk API**
1. Sign up at pdfmunk.com.
2. Copy the API key → add it in Credentials → HtmlcsstopdfApi.

**Google Drive OAuth2**
1. Create a project in the Google Cloud Console.
2. Enable the Drive API.
3. Create OAuth 2.0 credentials.
4. Add them in n8n: Credentials → Google Drive OAuth2 → Connect.

**Google Sheets OAuth2**
1. Enable the Google Sheets API in the same project.
2. Add it in n8n: Credentials → Google Sheets OAuth2 → Connect.

**Slack OAuth2**
1. Create an app at Slack API.
2. Add scopes: `chat:write`, `chat:write.public`.
3. Install it to your workspace.
4. Add the bot token in Credentials → Slack OAuth2 API.

### 3. Configure Node Settings

**Google Drive Folder ID**
1. Create a folder in Drive for the PDFs.
2. Copy the folder ID from the URL: `https://drive.google.com/drive/folders/FOLDER_ID_HERE`
3. Paste it in the "Upload to Google Drive" node.

**Google Sheets Configuration**
1. Create a new sheet titled "Support Ticket Documentation Log."
2. Add these headers in Row 1:

| Ticket ID | Subject | Customer Name | Customer Email | Agent Name | Priority | Category | Resolved Date | Resolution Time | PDF Link | Document Generated | Status |
| --------- | ------- | ------------- | -------------- | ---------- | -------- | -------- | ------------- | --------------- | -------- | ------------------ | ------ |

3. Copy the Sheet ID from the URL: `https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit`
4. Paste it in the "Update Google Sheets" node configuration.

**Slack Channel ID**
1. Right-click your Slack channel → View Channel Details.
2. Copy the Channel ID.
3. Update it in the "Send Slack Notification," "Error – PDF Failed," and "Error – Upload Failed" nodes.

### 4. Configure the Webhook in Your Support Tool

1. Activate the workflow in n8n.
2. Copy the webhook URL from the "Webhook – Receive Ticket" node.
3. In Zendesk/Freshdesk:
   - Trigger: "Ticket Status = Resolved"
   - Method: POST
   - Paste the n8n webhook URL
   - Send ticket data as JSON

### 5. Test the Workflow

1. Click "Execute Workflow" manually.
2. Send a test webhook (a sample payload is sketched after the Data Flow diagram below).
3. Verify each step completes successfully. Check:
   - Generated PDF in Google Drive
   - Row entry in Google Sheets
   - Slack notification delivery

## How It Works

1. **Webhook Trigger** → receives the resolved ticket
2. **Data Extraction** → normalizes ticket fields
3. **AI Summarization (OpenAI)** → generates a structured summary
4. **HTML Formatting** → styles and adds branding
5. **PDF Conversion (PDFMunk)** → converts HTML → PDF
6. **Google Drive Upload** → saves and returns a shareable link
7. **Sheets Logging** → appends ticket info + PDF link
8. **Slack Notification** → notifies the team with a summary
9. **Error Handling** → detects and reports failures

Result: a clean, documented ticket case study in minutes.

## Customization

**Branding**
Update the company name, logo URL, and color scheme in the "Format HTML" node. Default color: `#4CAF50` → replace it with your brand color.

**AI Prompt Customization**
Modify the "AI Summarization (OpenAI)" node to add:
- Industry-specific terms
- Additional sections or insights
- A different summary tone or length

**Notification Customization**
Add @mentions or emojis in Slack messages for better visibility.

## Data Flow

Webhook → Extract Data → AI Summary → Format HTML → Convert to PDF
  ↓
Download PDF → Upload to Drive → Log in Sheets → Notify Team
  ↓
Error Handling (if any)
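For the test in step 5, a minimal payload might look like the following. The field names are assumptions inferred from the data-extraction list above, so match them to whatever your support platform actually sends:

```javascript
// Hypothetical test payload for the "Webhook – Receive Ticket" node.
// Field names mirror the extracted ticket data listed above; adjust to your platform.
const testTicket = {
  ticket_id: "TK-1042",
  subject: "Login fails after password reset",
  customer_name: "Jane Doe",
  customer_email: "jane@example.com",
  agent_name: "Sam Lee",
  priority: "high",
  category: "Authentication",
  resolved_date: "2024-05-02T14:30:00Z",
  resolution_time: "3h 12m",
  description: "Customer could not log in after resetting their password.",
  resolution: "Cleared the stale session token and re-sent the reset link.",
};
```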
## Expected Output

**PDF document includes:**
- Branded header with company name/logo
- Resolution time badge
- Ticket metadata (ID, priority, customer, agent, etc.)
- Full AI-generated case study
- Professional footer with timestamp

**Google Sheets entry:**
- All ticket info
- Resolution metrics
- Direct PDF link
- Status = "Generated"

**Slack notification:**
- Summary of the ticket
- Clickable PDF link
- Timestamp

## Performance

- **Processing Time:** 10–20 seconds per ticket
- **Capacity:** 100+ tickets per day
- **PDF Size:** 50–300 KB

## Troubleshooting

- **Webhook not triggering** → check the webhook URL, trigger conditions, and public access.
- **PDF generation fails** → verify the HTML syntax and PDFMunk API key.
- **Google Drive upload fails** → re-authenticate credentials or check folder permissions.
- **Slack notification missing** → ensure the bot token, scopes, and channel ID are valid.
- **Data extraction errors** → adjust field mappings or inspect the payload format.

## Best Practices

- Test before production rollout
- Monitor first-week error logs
- Organize Drive by date/priority
- Validate Sheets columns
- Use a dedicated Slack channel
- Archive old PDFs regularly
- Review AI summaries weekly
- Document configuration changes

## Security Notes

- All credentials are stored securely in n8n
- PDF links are restricted by Drive sharing settings
- Webhooks use HTTPS for secure data transfer
- No sensitive info is logged in error messages

## Future Enhancements

- Multi-language summaries
- Integration with Confluence or Notion
- Customer satisfaction feedback link
- ML-based issue categorization
- Analytics dashboard
- Weekly email digests
- Public knowledge base generation

## Support Resources

- n8n Documentation
- n8n Community
- OpenAI API Docs
- PDFMunk Support
- Google Drive API
- Slack API Docs

## License

This workflow template is provided as-is for use with n8n under the MIT License.
by Friedemann Schuetz
Welcome to my VEO3 Video Generator Workflow!

This automated workflow transforms simple text descriptions into professional 8-second videos using Google's cutting-edge VEO3 AI model. Users submit video ideas through a web form, and the system automatically generates optimized prompts, creates high-quality videos with native audio, and delivers them via Google Drive - all powered by Claude 4 Sonnet for intelligent prompt optimization.

This workflow has the following sequence:

1. **VEO3 Generator Form** - web form interface for users to input video content, format, and duration
2. **Video Prompt Generator** - AI agent powered by Claude 4 Sonnet that:
   - Analyzes user input for video content requirements
   - Creates factual, professional video titles
   - Generates detailed VEO3 prompts with subject, context, action, style, camera motion, composition, ambiance, and audio elements
   - Optimizes prompts for 16:9 landscape format and 8-second duration
3. **Create VEO3 Video** - submits the optimized prompt to the fal.ai VEO3 API for video generation
4. **Wait 30 seconds** - initial waiting period for video processing to begin
5. **Check VEO3 Status** - monitors the video generation status via the fal.ai API
6. **Video completed?** - decision node that checks if video generation is finished (a polling sketch appears after the use cases below):
   - If not completed: returns to the wait cycle
   - If completed: proceeds to video retrieval
7. **Get VEO3 Video URL** - retrieves the final video download URL from fal.ai
8. **Download VEO3 Video** - downloads the generated MP4 video file
9. **Merge** - combines video data with metadata for final processing
10. **Save Video to Google Drive** - uploads the video to the specified Google Drive folder
11. **Video Output** - displays a completion message with a Google Drive link to the user

The following accesses are required for the workflow:

- **Anthropic API** (Claude 4 Sonnet): Documentation
- **fal.ai API** (VEO3 model): create an API key at https://fal.ai/dashboard/keys
- **Google Drive API**: Documentation

Workflow features:

- **User-friendly web form:** simple interface for video content input
- **AI-powered prompt optimization:** Claude 4 Sonnet creates professional VEO3 prompts
- **Automatic video generation:** leverages Google's VEO3 model via fal.ai
- **Status monitoring:** real-time tracking of video generation progress
- **Google Drive integration:** automatic upload and sharing of generated videos
- **Structured output:** consistent video titles and professional prompt formatting
- **Audio optimization:** VEO3's native audio generation with ambient sounds and music

Current limitations:

- **Format:** only 16:9 landscape videos supported
- **Duration:** only 8-second videos supported
- **Processing time:** videos typically take 60–120 seconds to generate

Use cases:

- **Content creation:** generate videos for social media, websites, and presentations
- **Marketing materials:** create promotional videos and advertisements
- **Educational content:** produce instructional and explanatory videos
- **Prototyping:** rapid video concept development and testing
- **Creative projects:** artistic and experimental video generation
- **Business presentations:** professional video content for meetings and pitches
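The wait-and-check cycle in steps 4–6 is simple polling. Below is a minimal standalone sketch of the same logic; the status endpoint and response fields are assumptions, since fal.ai returns the exact URLs to poll in its response to the initial generation request:

```javascript
// Hypothetical polling loop mirroring the Wait 30 seconds → Check VEO3 Status cycle.
// statusUrl and the "status" values are assumptions; use the URLs returned by
// your fal.ai submission request.
async function waitForVideo(statusUrl, apiKey, intervalMs = 30000) {
  for (;;) {
    const res = await fetch(statusUrl, {
      headers: { Authorization: `Key ${apiKey}` },
    });
    const body = await res.json();
    if (body.status === "COMPLETED") return body;        // proceed to video retrieval
    if (body.status === "FAILED") throw new Error("VEO3 generation failed");
    await new Promise((r) => setTimeout(r, intervalMs)); // back to the wait cycle
  }
}
```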
Feel free to contact me via LinkedIn if you have any questions!
by Daniel Shashko
## How it Works

This workflow accepts meeting transcripts via webhook (Zoom, Google Meet, Teams, Otter.ai, or manual notes), immediately processing them through an intelligent pipeline that eliminates post-meeting admin work.

The system parses multiple input formats (JSON, form data, transcription outputs), extracting meeting metadata including title, date, attendees, transcript content, duration, and recording URLs. OpenAI analyzes the transcript to extract eight critical dimensions: executive summary, key decisions with ownership, action items with assigned owners and due dates, discussion topics, open questions, next steps, risks/blockers, and follow-up meeting requirements—all returned as structured JSON.

The intelligence engine enriches each action item with unique IDs, priority scores (weighing urgency + owner assignment + due date), status initialization, and meeting context links, then calculates a completeness score (0–100) that penalizes missing owners and undefined deadlines (see the sketch after this section).

Multi-channel distribution ensures visibility: Slack receives formatted summaries with emoji categorization for decisions (✅), action items (🎯) with priority badges and owner assignments, and completeness scores (📊). Notion gets dual-database updates—meeting notes with formatted decisions, and individual task cards in your action item database with full filtering and kanban capabilities. Task owners receive personalized HTML emails with priority color-coding and meeting context, while Google Calendar creates due-date reminders as calendar events.

Every meeting logs to Google Sheets for analytics tracking: attendee count, duration, action items created, priority distribution, decision count, completeness score, and follow-up indicators. The workflow returns a JSON response confirming successful processing with the meeting ID, action item count, and executive summary. The entire pipeline executes in 8–12 seconds from submission to full distribution.
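A minimal sketch of the scoring step described above; the exact weights are assumptions, since the template only fixes the ranges (0–40 for priority, 0–100 for completeness):

```javascript
// Hypothetical scoring, matching the described ranges: priority 0–40, completeness 0–100.
function scoreActionItem(item) {
  let priority = 0;
  if (item.urgent) priority += 20;   // urgency weight (assumed)
  if (item.owner) priority += 10;    // owner assigned
  if (item.dueDate) priority += 10;  // deadline defined
  return Math.min(priority, 40);
}

function completenessScore(items) {
  let score = 100;
  for (const item of items) {
    if (!item.owner) score -= 10;    // penalize missing owners
    if (!item.dueDate) score -= 10;  // penalize undefined deadlines
  }
  return Math.max(score, 0);
}
```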
## Who is this for?

- Product and engineering teams drowning in scattered action items across tools
- Remote-first companies where verbal commitments vanish after calls
- Executive teams needing auditable decision records without dedicated note-takers
- Startups juggling 10+ meetings daily without time for manual follow-up
- Operations teams tracking cross-functional initiatives requiring accountability

## Setup Steps

- **Setup time:** 25–35 minutes
- **Requirements:** OpenAI API key, Slack workspace, Notion account, Google Workspace (Calendar/Gmail/Sheets), optional transcription service

1. **Webhook Trigger:** automatically generates a URL; configure it as a POST endpoint accepting JSON with title, date, attendees, transcript, duration, recording_url, organizer
2. **Transcription Integration:** connect Otter.ai/Fireflies.ai/Zoom webhooks, or create a manual submission form
3. **OpenAI Analysis:** add API credentials, configure GPT-4 or GPT-3.5-turbo, temperature 0.3, max tokens 1500
4. **Intelligence Synthesis:** JavaScript calculates priority scores (0–40 range) and completeness metrics (0–100); customize the thresholds
5. **Slack Integration:** create an app with the chat:write scope, get the bot token, replace the channel ID placeholder with your #meeting-summaries channel
6. **Notion Databases:** create a "Meeting Notes" database (title, date, attendees, summary, action items, completeness, recording URL) and an "Action Items" database (title, assigned to, due date, priority, status, meeting relation), share both with the integration, add the token
7. **Email Notifications:** configure Gmail OAuth2 or SMTP, customize the HTML template with company branding
8. **Calendar Reminders:** enable the Calendar API; the workflow creates events on due dates at 9 AM (adjustable) and adds the task owner as an attendee
9. **Analytics Tracking:** create a Google Sheet with columns for Meeting_ID, Title, Date, Attendees, Duration, Action_Items, High_Priority, Decisions, Completeness, Unassigned_Tasks, Follow_Up_Needed
10. **Test:** POST a sample transcript, then verify the Slack message, Notion entries, emails, calendar events, and Sheets logging

## Customization Guidance

- **Meeting Types:** daily standups (reduce tokens to 500, Slack-only), sprint planning (add Jira integration), client calls (add CRM logging), executive reviews (stricter completeness thresholds)
- **Priority Scoring:** add an urgency multiplier for <48 hr due dates, owner seniority weights, customer impact flags
- **AI Prompt:** customize to emphasize deadlines, blockers, or technical decisions; add date parsing for phrases like "by end of week"
- **Notification Routing:** critical priority (score >30) → Slack DM + email; high (20–30) → channel + email; medium/low → email only
- **Tool Integrations:** add Jira/Linear for ticket creation, Asana/Monday for project management, Salesforce/HubSpot for CRM logging, GitHub for issue creation
- **Analytics:** build dashboards for meeting effectiveness scores, action item velocity, recurring topic clustering, team productivity metrics
- **Cost Optimization:** ~1,200 tokens/meeting × $0.002/1K (GPT-3.5) ≈ $0.0024/meeting; use the batch API for a 50% discount and cache common patterns

Once configured, this workflow becomes your team's institutional memory—capturing every commitment and decision while eliminating hours of weekly admin work, ensuring accountability is automatic and follow-through is guaranteed.

Built by Daniel Shashko. Connect on LinkedIn.
by explorium
# Research Agent - Automated Sales Meeting Intelligence

This n8n workflow automatically prepares comprehensive sales research briefs every morning for your upcoming meetings by analyzing both the companies you're meeting with and the individual attendees. The workflow connects to your calendar, identifies external meetings, enriches companies and contacts with deep intelligence from Explorium, and delivers personalized research reports—giving your sales team everything they need for informed, confident conversations.

DEMO Template Demo

## Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

**Google Calendar (or Outlook)**
- **Type:** OAuth2
- **Used for:** reading daily meeting schedules and identifying external attendees
- Alternative: Microsoft Outlook Calendar
- Get credentials at the Google Cloud Console

**Explorium API**
- **Type:** Generic Header Auth
- **Header:** Authorization
- **Value:** Bearer YOUR_API_KEY
- **Used for:** business/prospect matching, firmographic enrichment, professional profiles, LinkedIn posts, website changes, competitive intelligence
- Get your API key at the Explorium Dashboard

**Explorium MCP**
- **Type:** HTTP Header Auth
- **Used for:** real-time company intelligence and supplemental research for AI agents
- Connect to: https://mcp.explorium.ai/mcp

**Anthropic API**
- **Type:** API Key
- **Used for:** AI-powered company and attendee research analysis
- Get your API key at the Anthropic Console

**Slack (or preferred output)**
- **Type:** OAuth2
- **Used for:** delivering research briefs
- Alternative options: Google Docs, Email, Microsoft Teams, CRM updates

Go to Settings → Credentials, create these credentials, and assign them in the respective nodes before running the workflow.

## Workflow Overview

### Node 1: Schedule Trigger

Automatically runs the workflow on a recurring schedule.

- **Type:** Schedule Trigger
- **Default:** every morning before business hours
- **Customizable:** set to any interval (hourly, daily, weekly) or specific times

Alternative trigger options:
- **Manual Trigger:** on-demand execution
- **Webhook:** triggered by calendar events or CRM updates

### Node 2: Get many events

Retrieves meetings from your connected calendar.

- **Calendar Source:** Google Calendar (or Outlook)
- **Authentication:** OAuth2
- **Time Range:** current day + 18 hours (configurable via timeMax)
- **Returns:** all calendar events with attendee information, meeting titles, times, and descriptions

### Node 3: Filter for External Meetings

Identifies meetings with external participants and filters out internal-only meetings.

Filtering logic:
- Extracts attendee email domains
- Excludes your company domain (e.g., 'explorium.ai')
- Excludes calendar system addresses (e.g., 'resource.calendar.google.com')
- Only passes events with at least one external attendee

**Important setup note:** Replace 'explorium.ai' in the code node with your company domain to properly filter internal meetings.

Output:
- Events with external participants only
- external_attendees: array of external contact emails
- company_domains: unique list of external company domains per meeting
- external_attendee_count: number of external participants

A sketch of this filtering logic follows below.
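A minimal sketch of what the Node 3 code node does, assuming Google Calendar's attendee shape (`attendees[].email`); the actual node in the template may differ:

```javascript
// Hypothetical Code-node logic for "Filter for External Meetings".
const INTERNAL_DOMAINS = ["explorium.ai", "resource.calendar.google.com"]; // replace with yours
const items = $input.all();

return items
  .map((item) => {
    const attendees = item.json.attendees || [];
    const external = attendees
      .map((a) => a.email)
      .filter((email) => {
        const domain = (email || "").split("@")[1];
        return domain && !INTERNAL_DOMAINS.includes(domain);
      });
    item.json.external_attendees = external;
    item.json.company_domains = [...new Set(external.map((e) => e.split("@")[1]))];
    item.json.external_attendee_count = external.length;
    return item;
  })
  .filter((item) => item.json.external_attendee_count > 0); // external meetings only
```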
## Company Research Pipeline

### Node 4: Loop Over Items

Iterates through each meeting with external attendees for company research.

### Node 5: Extract External Company Domains

Creates a deduplicated list of all external company domains from the current meeting.

### Node 6: Explorium API: Match Business

Matches company domains to Explorium's business entity database.

- **Method:** POST
- **Endpoint:** /v1/businesses/match
- **Authentication:** Header Auth (Bearer token)

Returns:
- business_id: unique Explorium identifier
- matched_businesses: array of matches with confidence scores
- Company name and basic info

### Node 7: If

Validates that a business match was found before proceeding to enrichment.

- **Condition:** business_id is not empty
- **If True:** proceed to the parallel enrichment nodes
- **If False:** skip to the next company in the loop

### Nodes 8–9: Parallel Company Enrichment

**Node 8: Explorium API: Business Enrich**
- **Endpoints:** /v1/businesses/firmographics/enrich, /v1/businesses/technographics/enrich
- **Enrichment Types:** firmographics, technographics
- **Returns:** company name, description, website, industry, employees, revenue, headquarters location, ticker symbol, LinkedIn profile, logo, full tech stack, nested tech stack by category, BI & analytics tools, sales tools, marketing tools

**Node 9: Explorium API: Fetch Business Events**
- **Endpoint:** /v1/businesses/events/fetch
- **Event Types:** new funding rounds, new investments, mergers & acquisitions, new products, new partnerships
- **Date Range:** September 1, 2025 – November 4, 2025
- **Returns:** recent business milestones and financial events

### Node 10: Merge

Combines enrichment responses and events data into a single data object.

### Node 11: Cleans Merge Data Output

Transforms merged enrichment data into a structured format for AI analysis.

### Node 12: Company Research Agent

AI agent (Claude Sonnet 4) that analyzes company data to generate actionable sales intelligence.

Input: structured company profile with all enrichment data

Analysis focus:
- Company overview and business context
- Recent website changes and strategic shifts
- Tech stack and product focus areas
- Potential pain points and challenges
- How Explorium's capabilities align with their needs
- Timely conversation starters based on recent activity

Connected to Explorium MCP: can pull additional real-time intelligence if needed to create a more detailed analysis.

### Node 13: Create Company Research Output

Formats the AI analysis into a readable, shareable research brief.

## Attendee Research Pipeline

### Node 14: Create List of All External Attendees

Compiles all unique external attendee emails across all meetings.

### Node 15: Loop Over Items2

Iterates through each external attendee for individual enrichment.

### Node 16: Extract External Company Domains1

Extracts the company domain from each attendee's email.

### Node 17: Explorium API: Match Business1

Matches the attendee's company domain to get a business_id for prospect matching.

- **Method:** POST
- **Endpoint:** /v1/businesses/match
- **Purpose:** link the attendee to their company

### Node 18: Explorium API: Match Prospect

Matches the attendee email to Explorium's professional profile database.

- **Method:** POST
- **Endpoint:** /v1/prospects/match
- **Authentication:** Header Auth (Bearer token)

Returns:
- prospect_id: unique professional profile identifier

### Node 19: If1

Validates that a prospect match was found.

- **Condition:** prospect_id is not empty
- **If True:** proceed to prospect enrichment
- **If False:** skip to the next attendee
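The two match calls (Nodes 17 and 18) are plain authenticated POSTs. Here is a sketch of the prospect match as a standalone request; the base URL and request-body shape are assumptions, so check Explorium's API reference before reusing:

```javascript
// Hypothetical call mirroring Node 18 (Match Prospect). Only the path and
// Bearer auth come from the template; base URL and body fields are assumptions.
async function matchProspect(email, apiKey) {
  const res = await fetch("https://api.explorium.ai/v1/prospects/match", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prospects_to_match: [{ email }] }),
  });
  if (!res.ok) throw new Error(`Match failed: ${res.status}`);
  const data = await res.json();
  return data.matched_prospects?.[0]?.prospect_id ?? null; // empty → skip attendee
}
```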
### Node 20: Explorium API: Prospect Enrich

Enriches the matched prospect using multiple Explorium endpoints.

- **Enrichment Types:** contacts, profiles, linkedin_posts
- **Endpoints:** /v1/prospects/contacts/enrich, /v1/prospects/profiles/enrich, /v1/prospects/linkedin_posts/enrich

Returns:
- **Contacts:** professional email, email status, all emails, mobile phone, all phone numbers
- **Profiles:** full professional history, current role, skills, education, company information, experience timeline, job titles and seniority
- **LinkedIn Posts:** recent LinkedIn activity, post content, engagement metrics, professional interests and thought leadership

### Node 21: Cleans Enrichment Outputs

Structures prospect data for AI analysis.

### Node 22: Attendee Research Agent

AI agent (Claude Sonnet 4) that analyzes prospect data to generate personalized conversation intelligence.

Input: structured professional profile with activity data

Analysis focus:
- Career background and progression
- Current role and responsibilities
- Recent LinkedIn activity themes and interests
- Potential pain points in their role
- Relevant Explorium capabilities for their needs
- Personal connection points (education, interests, previous companies)
- Opening conversation starters

Connected to Explorium MCP: can gather additional company or market context if needed.

### Node 23: Create Attendee Research Output

Formats the attendee analysis into a readable brief with clear sections.

### Node 24: Merge2

Combines the company research output with attendee information for final assembly.

### Node 25: Loop Over Items1

Manages the final loop that combines company and attendee research for output.

### Node 26: Send a message (Slack)

Delivers the combined research briefs to a specified Slack channel or user.

Alternative output options:
- **Google Docs:** create a formatted document per meeting
- **Email:** send to the meeting organizer or sales rep
- **Microsoft Teams:** post to channels or DMs
- **CRM:** update opportunity/account records with research
- **PDF:** generate downloadable research reports

## Workflow Flow Summary

1. **Schedule:** the workflow runs automatically every morning
2. **Fetch Calendar:** pull today's meetings from Google Calendar/Outlook
3. **Filter:** identify meetings with external attendees only
4. **Extract Companies:** get unique company domains from external attendees
5. **Extract Attendees:** compile a list of all external contacts

Company research path:
1. **Match Companies:** identify businesses in the Explorium database
2. **Enrich (Parallel):** pull firmographics, website changes, competitive landscape, events, and challenges
3. **Merge & Clean:** combine and structure company data
4. **AI Analysis:** generate a company research brief with insights and talking points
5. **Format:** create readable company research output

Attendee research path:
1. **Match Prospects:** link attendees to professional profiles
2. **Enrich (Parallel):** pull profiles, job changes, and LinkedIn activity
3. **Merge & Clean:** combine and structure prospect data
4. **AI Analysis:** generate attendee research with background and approach
5. **Format:** create readable attendee research output

Delivery:
1. **Combine:** merge company and attendee research for each meeting
2. **Send:** deliver complete research briefs to Slack or your preferred platform

This workflow eliminates manual pre-meeting research by automatically preparing comprehensive intelligence on both companies and individuals—giving sales teams the context and confidence they need for every conversation.
## Customization Options

**Calendar Integration.** Works with multiple calendar platforms:
- **Google Calendar:** full OAuth2 integration
- **Microsoft Outlook:** Calendar API support
- **CalDAV:** generic calendar protocol support

**Trigger Flexibility.** Adjust when research runs:
- **Morning Routine:** default daily at 7 AM
- **On-Demand:** manual trigger for specific meetings
- **Continuous:** hourly checks for new meetings

**Enrichment Depth.** Add or remove enrichment endpoints:
- **Company:** technographics, funding history, news mentions, hiring signals
- **Prospects:** contact information, social profiles, company changes
- **Customizable:** select only the data you need to optimize speed and costs

**Research Scope.** Configure what gets researched:
- **All External Meetings:** default behavior
- **Filtered by Keywords:** only meetings with specific titles
- **By Attendee Count:** only meetings with X+ external attendees
- **By Calendar:** specific calendars only

**Output Destinations.** Deliver research to your preferred platform:
- **Messaging:** Slack, Microsoft Teams, Discord
- **Documents:** Google Docs, Notion, Confluence
- **Email:** Gmail, Outlook, custom SMTP
- **CRM:** Salesforce, HubSpot (update account notes)
- **Project Management:** Asana, Monday.com, ClickUp

**AI Model Options.** Swap AI providers based on needs:
- Default: Anthropic Claude (Sonnet 4)
- Alternatives: OpenAI GPT-4, Google Gemini

## Setup Notes

- **Domain Configuration:** replace 'explorium.ai' in the Filter for External Meetings code node with your company domain
- **Calendar Connection:** ensure OAuth2 credentials have calendar read permissions
- **Explorium Credentials:** both the API key and MCP credentials must be configured
- **Output Timing:** the schedule trigger should run with enough lead time before your first meetings
- **Rate Limits:** adjust loop batch sizes if you hit API rate limits during enrichment
- **Slack Configuration:** select the destination channel or user for research delivery
- **Data Privacy:** research is based on publicly available professional information and company data

This workflow acts as your automated sales researcher, preparing detailed intelligence reports every morning so your team walks into every meeting informed, prepared, and ready to have meaningful conversations that drive business forward.
by Monisha Panda
## Description

This n8n template demonstrates how to build an AI-powered Market Research Assistant using a multi-agent workflow. It helps you get a 360-degree view of a product idea or research topic by analysing:

- Customer insights and pain points
- Market size and macro/micro-economic trends
- Competitive landscape and alternatives

The workflow mirrors how product managers and strategy teams conduct discovery — by breaking down research into parallel workstreams and then synthesizing insights into a single narrative.

## How it works

1. **Planner Agent.** The main agent receives your research topic as input and defines (an illustrative output shape appears at the end of this description):
   - Research objective
   - Key areas of focus (Customer, Market, Competition)
   - Assumptions and constraints
2. **Parallel Research Agents.** Based on the planner's output, three specialist agents run in parallel:
   - **Customer Insights Agent** researches public sources such as articles and forums to infer customer behaviour, pain points, and existing tools.
   - **Market Scan Agent** analyses macro-economic and micro-economic trends, estimates TAM/SAM/SOM, and highlights key risks and assumptions.
   - **Competitor Insights Agent** identifies existing competitors and substitutes and summarises how they are positioned in the market.
3. **Synthesis Agent.** The outputs from all research agents are consolidated and analysed by a synthesis agent, which produces a market discovery memo.
4. **Final Output.** The discovery memo is generated as a document and sent to your email.

## How to use

1. Trigger the workflow via the chat message node.
2. Provide your research topic or product idea, along with optional context such as target market.
3. The workflow runs automatically and delivers a structured discovery memo to your inbox.

## Setup Steps

- API credentials for:
  - Groq or OpenAI (LLM)
  - Documentero (document generation)
- A configured Documentero template
- Gmail OAuth or email credentials for delivery of the memo
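To make the hand-off between the planner and the specialist agents concrete, the planner's structured output might look something like this; the field names are illustrative, not the template's actual schema:

```javascript
// Hypothetical planner-agent output consumed by the three research agents.
const plannerOutput = {
  objective: "Assess viability of an AI-powered expense-tracking app for freelancers",
  focusAreas: {
    customer: ["pain points with current tools", "willingness to pay"],
    market: ["TAM/SAM/SOM estimates", "macro trends in freelance work"],
    competition: ["direct competitors", "substitutes and workarounds"],
  },
  assumptions: ["target market: US and EU freelancers"],
  constraints: ["public sources only"],
};
```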
by Wan Dinie
# AI Content Generator with Auto Pexels Image Matching

This n8n template demonstrates how to use AI to generate engaging content and automatically find matching royalty-free images based on the content context.

Use cases are many: try creating blog posts with hero images, generating social media content with visuals, or drafting email newsletters with relevant photos.

## Good to know

- At the time of writing, Pexels offers free API access with up to 200 requests per hour. See the Pexels API for updated info.
- OpenAI API costs vary by model. GPT-4.1 mini is cheaper, while standard GPT-4.1 and above offer deeper content generation but cost more per request.
- Using the floating JavaScript node can reduce token usage by handling content and keyword extraction in a single prompt.

## How it works

1. We'll collect a content topic or idea via a manual form trigger.
2. OpenAI generates initial content based on your input topic.
3. The AI extracts suitable keywords from the generated content to find matching images.
4. The keywords are sent to the Pexels API, which searches for relevant royalty-free stock images (a sketch of this request appears at the end of this description).
5. OpenAI creates the final polished content that complements the selected image.
6. The result is displayed as a formatted HTML template combining the content and image together.

## How to use

- The manual trigger node is used as an example, but feel free to replace this with other triggers such as a webhook or even a form.
- You can batch-generate multiple pieces of content by looping through a list, but of course, the processing will take longer and cost more.

## Requirements

- OpenAI API key (get one at https://platform.openai.com)
- Pexels API key (get free access at https://www.pexels.com/api)
- Valid content topics or ideas to generate from

## Customizing this workflow

- **Optimize token usage:** connect the floating "Extract Content and Image Keyword" JavaScript node to process everything in one prompt and minimize API costs. If you use this option, update the expressions in the "Pexels Image Search" node and the "Create Suitable Content Including Image" node to reference the extracted keywords from the JS node.
- Upgrade to GPT-4.1, GPT-5.1, or GPT-5.2 for more advanced and creative content generation.
- Change the HTML template output to other formats like Markdown, plain text, or JSON for different publishing platforms.
- For the long term, store the results in a database like Supabase or Google Sheets if you plan to reuse the content.
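Step 4's image lookup is a single authenticated GET. Here is a sketch of the equivalent request outside n8n; Pexels takes the API key directly in the Authorization header, and the response fields shown are the documented ones, but verify against the current API docs:

```javascript
// Search Pexels for a royalty-free image matching the extracted keyword.
async function findImage(keyword, apiKey) {
  const url = `https://api.pexels.com/v1/search?query=${encodeURIComponent(keyword)}&per_page=1`;
  const res = await fetch(url, { headers: { Authorization: apiKey } });
  if (!res.ok) throw new Error(`Pexels request failed: ${res.status}`);
  const data = await res.json();
  const photo = data.photos?.[0];
  return photo ? { url: photo.src.large, photographer: photo.photographer } : null;
}
```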
by Amit Mehta
# n8n Workflow: Printify Automation - Update Title and Description - AlexK1919

This workflow automates the process of getting products from Printify, generating new titles and descriptions using OpenAI, and updating those products.

## How it Works

This workflow automatically retrieves a list of products from a Printify store, processes them to generate new titles and descriptions based on brand guidelines and custom instructions, and then updates the products on Printify with the new information. It also interacts with Google Sheets to track the status of the products being processed. The workflow can be triggered manually or by an update in a Google Sheet.

## Use Cases

- **E-commerce Automation:** automating content updates for a Printify store.
- **Marketing & SEO:** generating SEO-friendly or seasonal content for products using AI.
- **Product Management:** batch-updating product titles and descriptions without manual effort.

## Setup Instructions

1. **Printify API Credentials:** set up httpHeaderAuth credentials for Printify to allow the workflow to get and update products.
2. **OpenAI API Credentials:** provide an API key for OpenAI in the openAiApi credentials.
3. **Google Sheets Credentials:** the workflow requires two separate Google Sheets credentials: one for the trigger (googleSheetsTriggerOAuth2Api) and another for appending/updating data (googleSheetsOAuth2Api).
4. **Google Sheets Setup:** you need a Google Sheet to store product information and track the status of the updates. The workflow is linked to a specific spreadsheet.
5. **Brand Guidelines:** the Brand Guidelines + Custom Instructions node must be updated with your specific brand details and any custom instructions for the AI.

## Workflow Logic

1. **Trigger:** the workflow can be triggered manually or by an update in a Google Sheet when the upload column is changed to "yes".
2. **Get Product Info:** it fetches the shop ID and then a list of all products from Printify.
3. **Process Products:** the product data is split, and the workflow loops through each product.
4. **AI Content Generation:** for each product, the Generate Title and Desc node uses OpenAI to create a new title and description based on the original content, brand guidelines, and custom instructions.
5. **Google Sheets Update:** the workflow appends the product information and a "Product Processing" status to a Google Sheet. It then updates the row with the newly generated title and description and changes the status to "Option added".
6. **Printify Update:** the Printify - Update Product node sends a PUT request to the Printify API to apply the new title and description to the product.

## Node Descriptions

| Node Name | Description |
|-----------|-------------|
| When clicking 'Test workflow' | A manual trigger for testing the workflow. |
| Google Sheets Trigger | An automated trigger that starts the workflow when the upload column in the Google Sheet is updated. |
| Printify - Get Shops | Fetches the list of shops associated with the Printify account. |
| Printify - Get Products | Retrieves all products from the specified Printify shop. |
| Brand Guidelines + Custom Instructions | A Set node to store brand guidelines and custom instructions for the AI. |
| Generate Title and Desc | An OpenAI node that generates a new title and description based on the provided inputs. |
| GS - Add Product Option | Appends a new row to a Google Sheet to track the processing status of a product. |
| Update Product Option | Updates an existing row in the Google Sheet with the new product information and status. |
| Printify - Update Product | A PUT request to the Printify API to update a product with new information. |
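The final update (step 6) is a single authenticated PUT per product. As a sketch: the shop/product URL pattern follows Printify's v1 REST layout, but treat the exact body fields as assumptions and confirm them against the Printify API docs:

```javascript
// Hypothetical request mirroring the "Printify - Update Product" node.
async function updateProduct(shopId, productId, token, title, description) {
  const res = await fetch(
    `https://api.printify.com/v1/shops/${shopId}/products/${productId}.json`,
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ title, description }), // assumed body shape
    }
  );
  if (!res.ok) throw new Error(`Printify update failed: ${res.status}`);
  return res.json();
}
```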
## Customization Tips

- You can swap out the Printify API calls for similar services like Printful or Vistaprint.
- Modify the Brand Guidelines + Custom Instructions node to change the brand name, tone, or specific instructions for the AI.
- Change the number of options the workflow generates by modifying the Number of Options node.
- You can change the OpenAI model used in the Generate Title and Desc node, for example, from gpt-4o-mini to another model.

## Suggested Sticky Notes for Workflow

- "Update your Brand Guidelines before running this workflow. You can also add custom instructions for the AI node."
- "You can swap out the API calls to similar services like Printful, Vistaprint, etc."
- "Set the Number of Options you'd like for the Title and Description"

## Required Files

- 1V1gcK6vyczRqdZC_Printify_Automation_-Update_Title_and_Description-_AlexK1919.json: the main n8n workflow export for this automation.
- The Google Sheets template for this workflow.

## Testing Tips

- Run the workflow with the manual trigger to see the flow from start to finish.
- Change the upload column in the Google Sheet to "yes" to test the automated trigger.
- Verify that the new titles and descriptions are correctly updated on Printify.

## Suggested Tags & Categories

- Printify
- OpenAI
by Yusei Miyakoshi
## Who's it for

Teams that start their day in Slack and want a concise, automated summary of yesterday's emails — ops leads, PMs, founders, and anyone handling busy inboxes without writing code.

## What it does / How it works

Runs every morning at 08:00 (cron `0 0 8 * * *`), fetches all emails received *yesterday*, and routes the result: if none were found, it posts a polite "no emails" notice; if emails exist, it aggregates them and asks an AI agent to produce a structured digest, then formats and posts it to your chosen Slack channel. The flow uses **Gmail → If → Aggregate (Item Lists) → AI Agent (OpenRouter model with structured output) → Code (Slack formatter) → Slack**. A set of sticky notes on the canvas explains each step and the required inputs.

## How to set up

1. Connect Gmail (OAuth2) and keep the default date window (yesterday → today at 00:00).
2. Connect Slack (OAuth2) and select your target channel.
3. Add OpenRouter credentials and pick a compact model (e.g., gpt-4o-mini).
4. Keep the provided structured-output schema and formatter code (a rough formatter sketch appears at the end of this description).
5. Adjust the schedule/timezone if needed (the fallback message includes an Asia/Tokyo example).
6. Paste this description into the yellow sticky note at the top of the canvas.

## Requirements

- Gmail & Slack accounts with appropriate scopes
- OpenRouter API key stored in Credentials (no hard-coded keys)
- n8n Cloud or self-host with LangChain agent nodes enabled

## How to customize the workflow

- Narrow Gmail results with label/search filters (e.g., from:, subject:).
- Change the digest sections or tone in the AI Agent system prompt.
- Swap the model for cost/quality needs and tweak temperature/max tokens.
- Localize dates/timezones in the formatter code and Slack messages.
- Branch the output to email, Google Docs, or Sheets for archival.

## Security & publishing tips

Rename all nodes clearly, do not hardcode API keys, remove real channel IDs/emails before sharing, and group end-user variables in a Set (Fields) node. Keep the sticky notes—they're mandatory for reviewers.
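A minimal sketch of what the Code (Slack formatter) step might do; the digest field names are assumptions, since the template's actual structured-output schema isn't reproduced here:

```javascript
// Hypothetical Code-node formatter: structured digest → Slack mrkdwn text.
const digest = $input.first().json; // output of the AI Agent node

const lines = [`*📬 Email digest for ${digest.date}*`];
for (const item of digest.emails ?? []) {
  lines.push(`• *${item.subject}* (from ${item.from})`);
  lines.push(`   ${item.summary}`);
}
return [{ json: { text: lines.join("\n") } }];
```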
by Cheng Siong Chin
## How It Works

This workflow automates end-to-end real estate investment analysis by aggregating data from multiple sources and applying AI-driven evaluation. It is designed for real estate investors, analysts, and portfolio managers seeking data-backed decisions without manual research overhead.

The solution addresses the time-consuming challenge of collecting and analyzing fragmented real estate data—such as MLS listings, public records, demographic trends, and macroeconomic indicators—and transforms it into actionable insights using AI.

Data is collected in parallel across four streams: MLS property data, public records, demographic information, and macroeconomic signals. These streams are consolidated into a unified dataset and processed by OpenAI GPT-4, using calculator tools and structured output parsing for quantitative analysis.

## Setup Steps

1. Configure the HTTP nodes with your MLS API and public records service.
2. Add your OpenAI API key in the Chat Model node credentials.
3. Connect a Gmail account for acquisition team notifications.
4. Integrate your Slack workspace and specify the investor notification channel.
5. Set the trigger frequency in the Schedule node for your desired analysis cadence.

## Prerequisites

OpenAI API key, MLS data service access, public records API credentials

## Use Cases

Real estate investment firms screening multiple markets simultaneously

## Customization

Modify the AI prompts to adjust investment criteria priorities or add custom financial metrics.

## Benefits

Reduces investment analysis time from hours to minutes and eliminates manual data aggregation errors.
by Trung Tran
# 📄 Auto Extract Contacts from Business Cards to Sheet With GPT-4o

> This smart workflow extracts names, phone numbers, emails, and more from uploaded name card photos using AI, then logs them neatly into your Google Sheet. No typing. No mess. Just upload and go.

## 👤 Who's it for

- Sales & Business Development Teams
- Recruiters & Talent Acquisition Specialists
- Event Teams collecting business cards
- Admins who manage contact databases manually

## ⚙️ How it works / What it does

This workflow automates the extraction of contact details from uploaded name card (business card) images and stores them in a structured Google Sheet for easy tracking and follow-up.

Workflow steps:

1. The user uploads one or more name card images through a web form.
2. The uploaded files are saved to a Google Drive folder for archiving.
3. A smart AI agent (with OCR and GPT capabilities) scans each image and extracts the relevant contact data into structured JSON format.
4. The data is transformed, cleaned (e.g., removing + from phone numbers), and filtered.
5. Valid contacts are appended to a Google Sheet for central tracking and future use.

## 🛠 How to set up

1. **Create a Form**
   - Allow file upload (JPG/PNG format).
   - Label it "Name Card Uploader" with a clear description.
2. **Upload to Google Drive**
   - Use the Google Drive node to store the uploaded images.
3. **Configure the Smart Agent**
   - Use GPT-4o or a similar model with OCR capability.
   - Apply a structured output parser to extract contact fields like name, phone, email, company, etc.
4. **Transform Data**
   - Use the Code node to clean and structure the contact info (see the sketch at the end of this description).
   - Strip unwanted characters from phone numbers (e.g., +).
5. **Filter Invalid Records**
   - Remove entries with no meaningful contact data.
6. **Append to Google Sheets**
   - Use the Google Sheets node with "Append Sheet Row".
   - Map fields to columns like Name, Phone, Email, etc.

## ✅ Requirements

- n8n workflow environment
- Google Drive integration (for file storage)
- Google Sheets integration (for storing contacts)
- GPT-4o or any image-capable LLM
- Clear name card images (PNG/JPG, readable text)
- (Optional) Slack/email integration for notifications

## 🧩 How to customize the workflow

- **CRM Sync:** connect to platforms like HubSpot, Salesforce, or Zoho.
- **Validation Logic:** ensure records contain key fields like name or email before writing.
- **Uploader Info:** attach submitter metadata to each contact record.
- **Language Adaptation:** adjust extracted field labels/output to your preferred language.
- **Batch Upload:** handle multiple cards in a single image or multiple uploads in one go.
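As a sketch of the Transform Data and Filter steps above, assuming the agent's structured output provides name/phone/email/company fields:

```javascript
// Hypothetical Code-node cleanup for extracted business-card contacts.
const items = $input.all();

return items
  .map((item) => {
    const c = item.json;
    item.json = {
      name: (c.name || "").trim(),
      phone: (c.phone || "").replace(/[+\s-]/g, ""), // strip '+', spaces, dashes
      email: (c.email || "").trim().toLowerCase(),
      company: (c.company || "").trim(),
    };
    return item;
  })
  .filter((item) => item.json.name || item.json.email); // drop records with no useful data
```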
by Kirill Khatkevich
This workflow transforms raw Meta Ads data into actionable, expert-level insights. It acts as a virtual performance marketer, analyzing each creative's performance, comparing it against your historical benchmarks, and delivering clear recommendations on whether to scale, optimize, or stop the ad. By running parallel analyses with both OpenAI and Gemini, it provides a unique, dual-perspective evaluation. This template is the perfect sequel to our "Automation of Creative Testing" workflow but also works powerfully on its own.

## Use Case

Manually sifting through ads manager reports is tedious, and identifying true winners from early data is challenging. This workflow solves these problems by automating the entire analysis pipeline. It's designed for performance marketing teams who need to:

- Make faster, data-driven decisions on which creatives to scale.
- Get objective, AI-powered second opinions on ad performance.
- Systematically evaluate creatives against consistent, pre-defined benchmarks.
- Maintain a central log in Google Sheets with both raw metrics and qualitative AI analysis.
- Save hours spent on manual data crunching and report generation.

## How it Works

The workflow is structured into three logical stages:

1. **Configuration & Data Ingestion:**
   - A central ⚙️ Set parameters node holds all key variables: the data source (Meta or Sheets), campaign_id, and, most importantly, your historical performance benchmarks as a simple text block.
   - An IF node directs the workflow to fetch data either directly from a Meta Ads campaign or from a specified Google Sheet (ideal for analyzing a curated list of ads).
2. **Data Processing & AI Analysis (Parallel Execution):** after fetching raw performance data (spend, impressions, clicks, actions), the workflow splits into three parallel branches for maximum resilience:
   - **Branch 1 (Data Logging):** immediately writes or updates a row in Google Sheets with the raw metrics for the creative. This ensures no data is lost, even if the AI analysis fails.
   - **Branch 2 (OpenAI Analysis):** prepares a CSV string of the creative's data, sends it along with the benchmarks to an OpenAI model (e.g., GPT-4), and instructs it to return a structured JSON analysis.
   - **Branch 3 (Gemini Analysis):** performs the exact same process but using Google's Gemini model via a LangChain agent, providing a second, independent evaluation.
3. **Results Aggregation:**
   - The results from both AI models are received as structured JSON (a plausible shape is sketched below).
   - Two final Google Sheets nodes take these results and update the original row (matching by AdID), adding the evaluation, significance, summary, and recommendation into separate columns.
   - The final sheet contains a complete picture: raw data side-by-side with analyses from two different AIs.
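For orientation, the structured JSON each AI branch returns might look roughly like the following. The exact schema is defined by the prompts inside the template, so treat these field names as illustrative; they mirror the sheet columns named in the setup instructions:

```javascript
// Hypothetical per-creative analysis object returned by the OpenAI/Gemini branches.
const analysis = {
  AdID: "238849201",
  evaluation: "scale",            // e.g., scale | optimize | stop
  significance: "high",           // confidence given the sample size
  summary: "CTR and CPA beat the benchmark set by a wide margin.",
  recommendation: "Increase budget by 20% and monitor frequency.",
};
```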
## Setup Instructions

1. **Credentials:**
   - Connect your Meta Ads account.
   - Connect your Google account (for Sheets).
   - Connect your OpenAI account.
   - Connect your Google Gemini (PaLM) account.
2. **The ⚙️ Set parameters node:** this is the central control panel. Open this first Set node and customize it:
   - source: set to "Meta" to pull from a campaign or "sheets" to read from a Google Sheet.
   - campaign_id: if source is "Meta", enter your Meta Campaign ID here.
   - benchmarks_data: this is critical. Paste your own historical performance data here as a CSV-formatted text block. The template includes an example. For best results, use an export from Ads Manager of your top-performing creatives, including key metrics.
3. **Google Sheets nodes:** there are three Google Sheets nodes that write data. You need to configure all of them to point to the same spreadsheet and sheet.
   - **Ad metrics** (for raw metrics): select your spreadsheet and sheet. Ensure "Operation" is set to Append or Update.
   - **Ad data from OpenAI** (for OpenAI results): select the same spreadsheet/sheet. Set "Operation" to Update.
   - **Ad data from Gemini** (for Gemini results): select the same spreadsheet/sheet. Set "Operation" to Update.
   - Make sure your sheet has columns for all the data fields, e.g., AdID, FileName, spend, impressions, evaluation, summary, recommendation, evaluation G, summary G, etc.
4. **Activate the workflow:**
   - Set your desired frequency in the Schedule Trigger node.
   - Save and activate the workflow.

## Further Ideas & Customization

This powerful analysis engine can be extended even further:

- **Add a "Decision" node:** after the AI analyses are logged, add a final step that compares their recommendations. If both AIs say "scale", automatically increase the ad's budget via the Meta Ads API.
- **Create summary reports:** add a branch that, after all ads are processed, calculates an overall summary (e.g., "3 creatives recommended for scaling, 5 for stopping") and sends it to a Slack channel.
- **Dynamic benchmarks:** instead of pasting benchmarks into the Set node, create a step that reads them from a dedicated "Benchmarks" tab in your Google Sheet, making them even easier to update.
- **Experiment with prompts and benchmarks:** the quality of the AI analysis is highly dependent on the quality of your input. Don't be afraid to:
  - Refine the prompts in the AI Agent and Message a model nodes to better match your specific business context and KPIs.
  - Curate your benchmarks_data. Test different sets of benchmark data (e.g., "last 30 days top performers" vs. "all-time best") to see how it influences the AI's recommendations.

Finding the right combination of prompt and data is key to unlocking the most effective insights.
by Arthur Dimeglio
What this workflow does

Automatically:
• fetches fresh news,
• filters out aggregators/PR wires and duplicates,
• writes a human-sounding LinkedIn post with GPT,
• downloads the article image to verify it's usable,
• publishes to LinkedIn (with or without media), and
• logs the posted titles in Firestore to avoid re-posting.

Runs on a daily schedule (cron) and supports two post variants:
• Case 1: the article has a description → richer post
• Case 2: no description → short, still human and casual

⸻

How it works (high-level flow)

• Schedule Trigger (0 10,12,19,21 * * *): runs at 10:00, 12:00, 19:00, and 21:00 (server timezone).
• Firestore (Get Previous News Titles): loads previously posted titles (document asma/x20) to de-dupe.
• HTTP Request (API NEWS): calls newsapi.org with a query such as "AI Startup", a last 24–48 h window, searchIn=title, sorted by publishedAt.
• Code: Select Articles (sketched below):
  • excludes Biztoc and common aggregators/PR wires (Techmeme, TheFly, PRNewswire, GlobeNewswire, MarketWatch press releases, Medium, Substack, Yahoo consent pages, etc.),
  • requires a valid URL + image,
  • groups by topic (normalized title + domain) and picks the best representative,
  • sorts by recency and returns up to 10 unique articles.
• IF (URL & de-dupe checks): ensures a link is present and not already posted (compares against the Firestore titles).
• IF (Description Checker): branches on the presence of a description.
• LLM Agents (2 prompts): generate a casual, human LinkedIn post in English (no emojis/links/markdown, 2–3 hashtags).
• Post setup: cleans the text and passes the image URL forward.
• HTTP Request (Image Downloader): retrieves the image as a file to confirm the link works.
• LinkedIn Publisher:
  • If the image is OK → posts with media.
  • Otherwise → posts text-only.
• Time Checkers + Firestore Upserts: after a successful publish, writes the article's title to Firestore (asma/x20 fields title10, title12, title19, title21) so it won't be posted again at other times.

⸻

Credentials & prerequisites

• NewsAPI.org: API key (the free tier works to start; mind the rate limits).
• LinkedIn OAuth2: a connected account with permission to create posts on your profile (uses the "Person" target in the node).
• Google Firebase (Firestore): a Service Account with read/write access to the asma collection. The workflow uses document ID x20.

⸻

Setup (5 minutes)

1. Import the workflow and open it in n8n.
2. In API NEWS, set your NewsAPI key in the query param apiKey.
3. In Get Previous News Titles and Firebase Article Saver [1–8], attach your Google Service Account and confirm projectId, collection=asma.
4. In LinkedIn Publisher [1–4], attach your LinkedIn OAuth credential and ensure the Person is your profile URN.
5. (Optional) Adjust the cron in Hourly trigger (server timezone).
6. (Optional) Change the search query (q=AI startup), language, or time window in API NEWS.
7. Enable the workflow.

⸻

Customization tips

• Search scope: edit q, language, from/to in API NEWS to cover your niche.
• Aggregator policy: tweak the aggregatorDomains set in the Select Articles code node.
• Post voice: modify the LLM prompt (keeps the "human, slightly messy" tone).
• Hashtags: the prompt ends with 2–3 simple tags (#AI #Startups #Innovation) — change them as you like.
• Posting times: change the cron or the downstream time-checker logic to map specific titles → time slots.
• No-image fallback: the text-only path is already handled; replace it with a placeholder image if you prefer always-with-media.
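As a sketch of the Select Articles code node's core logic; the aggregator list and normalization are simplified, and the field names follow NewsAPI's article shape (url, urlToImage, title, publishedAt):

```javascript
// Simplified sketch of the "Select Articles" Code node.
const aggregatorDomains = new Set([
  "biztoc.com", "techmeme.com", "thefly.com", "prnewswire.com",
  "globenewswire.com", "medium.com", "substack.com",
]);

const articles = $input.first().json.articles || [];
const byTopic = new Map();

for (const a of articles) {
  if (!a.url || !a.urlToImage) continue;                 // require valid URL + image
  const domain = new URL(a.url).hostname.replace(/^www\./, "");
  if (aggregatorDomains.has(domain)) continue;           // skip aggregators/PR wires
  const topic = a.title.toLowerCase().replace(/[^a-z0-9 ]/g, "") + "|" + domain;
  if (!byTopic.has(topic)) byTopic.set(topic, a);        // one representative per topic
}

const selected = [...byTopic.values()]
  .sort((x, y) => new Date(y.publishedAt) - new Date(x.publishedAt))
  .slice(0, 10);                                         // up to 10 unique articles

return selected.map((json) => ({ json }));
```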
⸻

Notes & constraints

• Timezone: the Schedule Trigger uses the n8n server timezone; adjust it if your LinkedIn audience is elsewhere.
• De-dupe: this template stores the last posted titles in one Firestore doc (asma/x20) under title10, title12, title19, title21. You can change the schema or keep a rolling history.
• Filtering: items missing a URL or image are skipped by design. Yahoo consent pages are also skipped.
• LLM costs: posts are short, so usage is modest, but keep an eye on your OpenAI billing.
• NewsAPI limits: free plans throttle requests; consider caching or widening the time window if you hit the limits.

⸻

Troubleshooting

• Nothing posts: check the NewsAPI quota/response, then the URL checker and Description Checker branches.
• Image errors: some sites block hotlinking; the image download step will fall back to text-only posting.
• Duplicates appeared: verify the Firestore upserts executed after posting and that your comparison uses the right fields.
• Wrong hours: confirm your n8n instance timezone and the cron expression.

⸻

Why this template

You get a robust "news → LinkedIn" autoposter that feels authentically human (no corporate vibes), avoids low-quality aggregators, prevents duplicates, and gracefully handles media — all with clean, modular nodes that are easy to tweak.