by Gaetano Castaldo
## Web-to-Odoo Lead Funnel (UTM-ready)

Create crm.lead records in Odoo from any webform via a secure webhook. The workflow validates required fields, resolves UTMs by name (source, medium, campaign), and writes standard lead fields in Odoo. Clean, portable, and production-ready.

## Key features

- ✅ Secure webhook with Header Auth (`x-webhook-token`)
- ✅ Required-fields validation (firstname, lastname, email)
- ✅ UTM lookup by name (utm.source, utm.medium, utm.campaign)
- ✅ Clean consolidation before create (name, contact_name, email_from, phone, description, type, UTM IDs)
- ✅ Clear HTTP responses: 200 success / 400 bad request

## Prerequisites

- **Odoo** with Leads enabled (CRM → Settings → Leads)
- **Odoo API Key** for your user (use it as the password)
- **n8n Odoo credentials**: URL, DB name, Login, API Key
- **Public URL** for the webhook (ngrok/Cloudflare/reverse proxy). Ensure `WEBHOOK_URL` / `N8N_HOST` / `N8N_PROTOCOL` / `N8N_PORT` are consistent
- **Header Auth secret** (e.g., `x-webhook-token: <your-secret>`)

## How it works

1. **Ingest** – The Webhook receives a POST at `/webhook(-test)/lead-webform` with Header Auth.
2. **Validate** – An IF node checks required fields; if any are missing, it responds with 400 Bad Request.
3. **UTM lookup** – Three Odoo getAll queries fetch IDs by name:
   - utm.source → `source_id`
   - utm.medium → `medium_id`
   - utm.campaign → `campaign_id`
   If a record is not found, the corresponding ID remains null.
4. **Consolidate** – Merge + Code nodes produce a single clean object: `{ name, contact_name, email_from, phone, description, type: "lead", campaign_id, source_id, medium_id }`
5. **Create in Odoo** – The Odoo node (crm.lead → create) writes the lead with standard fields plus the UTM Many2one IDs.
6. **Respond** – A success node returns 200 with `{ status: "ok", lead_id }`.
## Payload (JSON)

Required: firstname, lastname, email
Optional: phone, notes, source, medium, campaign

```json
{
  "firstname": "John",
  "lastname": "Doe",
  "email": "john.doe@example.com",
  "phone": "+393331234567",
  "notes": "Wants a demo",
  "source": "Ads",
  "medium": "Website",
  "campaign": "Spring 2025"
}
```

## Quick test

```bash
curl -X POST "https://<host>/webhook-test/lead-webform" \
  -H "Content-Type: application/json" \
  -H "x-webhook-token: <secret>" \
  -d '{"firstname":"John","lastname":"Doe","email":"john@ex.com","phone":"+39333...","notes":"Demo","source":"Ads","medium":"Website","campaign":"Spring 2025"}'
```

## Notes

- Recent Odoo versions do not use the `mobile` field on leads/partners: use `phone` instead.
- Keep secrets and credentials out of the template; users set their own after import.
- If you want to auto-create missing UTM records, add an IF node after each getAll and a create operation on the corresponding utm.* model.
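The validate-and-consolidate steps can be sketched as plain functions, roughly as an n8n Code node would implement them. This is a minimal sketch, not the template's actual node code; the field names follow the payload above, and the UTM IDs are assumed to arrive from the three getAll lookups (null when no record matched).

```javascript
// Sketch of the "Validate" step: return the list of missing required fields.
function missingRequiredFields(body) {
  return ["firstname", "lastname", "email"].filter(
    (f) => !body[f] || String(body[f]).trim() === ""
  );
}

// Sketch of the "Consolidate" step: build the clean crm.lead payload.
// `utm` carries the IDs resolved by the getAll lookups (may be null).
function buildLeadPayload(body, utm = {}) {
  const fullName = `${body.firstname} ${body.lastname}`;
  return {
    name: fullName,
    contact_name: fullName,
    email_from: body.email,
    phone: body.phone || null,
    description: body.notes || null,
    type: "lead",
    campaign_id: utm.campaign_id ?? null,
    source_id: utm.source_id ?? null,
    medium_id: utm.medium_id ?? null,
  };
}
```

If `missingRequiredFields` returns a non-empty list, the workflow answers 400 with those field names; otherwise `buildLeadPayload`'s output goes straight into the crm.lead create call.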
by Paul Abraham
This n8n template demonstrates how to automatically generate accurate subtitles from any video and optionally translate them into other languages. By combining FFmpeg, OpenAI Whisper, and LibreTranslate, this workflow turns video audio into ready-to-use .srt subtitle files that can be delivered via email.

## Use cases

- Auto-generate subtitles for training or educational videos
- Translate videos into multiple languages for global reach
- Create accessibility-friendly content with minimal effort
- Build a backend for media platforms to process subtitles automatically

## Good to know

- This workflow requires a self-hosted n8n instance since it uses the Execute Command node.
- FFmpeg is used for audio extraction and must be installed on the host machine.
- OpenAI Whisper (local) is used for transcription, providing highly accurate speech-to-text results.
- LibreTranslate is used for translating subtitles into other languages.

## How it works

1. **Webhook Trigger** – Starts when a video URL is received.
2. **Download Video** – Fetches the video file from the provided link.
3. **Extract Audio (FFmpeg)** – Separates the audio track from the video file.
4. **Run Whisper (Local)** – Transcribes the extracted audio into text subtitles.
5. **Read SRT File** – Loads the generated .srt subtitle file.
6. **Merge Paths** – Combines the original and translated subtitle flows.
7. **Translate Subtitles (LibreTranslate)** – Translates the .srt file into the target language.
8. **Write Translated SRT** – Creates a translated .srt file for delivery.
9. **Send a Message (Gmail)** – Sends the final subtitle file (original or translated) via email.

## How to use

1. Clone this workflow into your self-hosted n8n instance.
2. Ensure FFmpeg and Whisper are installed and available via your server's shell path.
3. Add your LibreTranslate service credentials for translation.
4. Configure Gmail (or another email service) to send subtitle files.
5. Trigger the workflow by sending a video URL to the webhook, and receive subtitle files in your inbox.
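For the "Read SRT File" and "Translate Subtitles" steps, it helps to see the cue structure that .srt files carry. The following is an illustrative sketch (not part of the workflow itself) of parsing Whisper's .srt output into cue objects, for example before sending each text segment to LibreTranslate and reassembling the translated file:

```javascript
// Parse an .srt string into cue objects: { index, start, end, text }.
// Cues are separated by blank lines; line 1 is the index, line 2 the timing.
function parseSrt(srt) {
  return srt
    .trim()
    .split(/\r?\n\r?\n/)
    .map((block) => {
      const lines = block.split(/\r?\n/);
      const [start, end] = lines[1].split(" --> ");
      return { index: Number(lines[0]), start, end, text: lines.slice(2).join("\n") };
    });
}

// Serialize cue objects back to .srt, e.g. after translating each cue's text.
function toSrt(cues) {
  return cues
    .map((c) => `${c.index}\n${c.start} --> ${c.end}\n${c.text}`)
    .join("\n\n") + "\n";
}
```

Because only the `text` field changes during translation, the timings survive the round trip untouched, which keeps the translated subtitles in sync with the video.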
## Requirements

- Self-hosted n8n instance
- FFmpeg installed and available on the server
- OpenAI Whisper (local) installed and callable via the command line
- LibreTranslate service with API credentials
- Gmail (or any email integration) for delivery

## Customising this workflow

- Replace Gmail with Slack, Telegram, or Drive uploads for flexible delivery.
- Switch LibreTranslate for DeepL or Google Translate for higher-quality translations.
- Add post-processing steps such as formatting .srt files or embedding subtitles back into the video.
- Use the workflow as a foundation for a multi-language subtitle automation pipeline.
by Matheus Pedrosa
## Workflow Overview

Keeping API documentation updated is a challenge, especially when your endpoints are powerful n8n webhooks. This project solves that problem by turning your n8n instance into a self-documenting API platform.

This workflow acts as a central engine that scans your entire n8n instance for designated webhooks and automatically generates a single, beautiful, interactive HTML documentation page. By simply adding a standard Set node with specific metadata to any of your webhook workflows, you can make it instantly appear in your live documentation portal, complete with code examples and response schemas.

The final output is a single, callable URL that serves a professional, dark-themed, easy-to-navigate documentation page for all your automated webhook endpoints.

## Key Features

- **Automatic Discovery**: Scans all active workflows on your instance to find endpoints designated for documentation.
- **Simple Configuration via a Set Node**: No custom nodes needed! Just add a Set node named API_DOCS to any workflow you want to document and fill in a simple JSON structure.
- **Rich HTML Output**: Dynamically generates a single, responsive, dark-mode HTML page that looks professional right out of the box.
- **Interactive UI**: Uses Bootstrap accordions, allowing users to expand and collapse each endpoint to keep the view clean and organized.
- **Developer-Friendly**: Automatically generates a ready-to-use cURL command for each endpoint, making testing and integration fast.
- **Zero Dependencies**: The entire solution runs within n8n. No need to set up or maintain external documentation tools like Swagger UI or Redoc.

## Setup Instructions

This solution has two parts: configuring the workflows you want to document, and setting up this generator workflow.

### Part 1: In Each Workflow You Want to Document

1. Next to your Webhook trigger node, add a Set node.
2. Change its name to API_DOCS.
3. Create a single variable named jsonOutput (or docsData) and set its type to JSON.
Paste the following JSON structure into the value field and customize it with your endpoint's details:

```json
{
  "expose": true,
  "webhookPath": "PASTE_YOUR_WEBHOOK_PATH_HERE",
  "method": "POST",
  "summary": "Your Endpoint Summary",
  "description": "A clear description of what this webhook does.",
  "tags": ["Sales", "Automation"],
  "requestBody": { "exampleKey": "exampleValue" },
  "successCode": 200,
  "successResponse": { "status": "success", "message": "Webhook processed correctly." },
  "errorCode": 400,
  "errorResponse": { "status": "error", "message": "Invalid input." }
}
```

### Part 2: In This Generator Workflow

1. **n8n API Node**: Configure the GetWorkflows node with your n8n API credentials. It needs permission to read workflows.
2. **Configs Node**: Customize the main settings for your documentation page, such as the title (name_doc), version, and a short description.
3. **Webhook Trigger**: The Webhook node at the start (default path is /api-doc) provides the final URL for your documentation page. Copy this URL and open it in your browser.

### Required Credentials

- **n8n API credentials**: to allow this workflow to read your other workflows.
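The discovery step can be sketched in a few lines: given the workflow list from the n8n API, collect the `jsonOutput` of every Set node named API_DOCS and derive a ready-to-paste cURL command per endpoint. This is a hedged sketch, not the generator's actual code; the exact location of the JSON value inside `node.parameters` can vary between n8n versions, so treat that property access as an assumption.

```javascript
// Collect documentation entries from an array of workflow objects
// (shape as returned by GET /api/v1/workflows, simplified).
function collectDocEntries(workflows, baseUrl) {
  const entries = [];
  for (const wf of workflows) {
    for (const node of wf.nodes || []) {
      if (node.name !== "API_DOCS") continue;
      // Assumed location of the Set node's JSON value.
      const doc = JSON.parse(node.parameters.jsonOutput);
      if (!doc.expose) continue; // only documented endpoints opt in
      entries.push({
        ...doc,
        workflow: wf.name,
        curl: `curl -X ${doc.method} "${baseUrl}/webhook/${doc.webhookPath}" ` +
          `-H "Content-Type: application/json" -d '${JSON.stringify(doc.requestBody)}'`,
      });
    }
  }
  return entries;
}
```

Each entry then becomes one Bootstrap accordion item in the generated HTML page.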
by Nexio_2000
This n8n template demonstrates how to export all icon metadata from an Iconfinder account into an organized format with previews, names, iconset names, and tags. It generates HTML and CSV outputs.

## Good to know

- Iconfinder does not provide a built-in feature for contributors to export all icon data at once, which motivated the creation of this workflow.
- The workflow exports all iconsets for the selected user account and can handle large collections.
- Preview image URLs are extracted in a consistent size (e.g., 128x128) for easy viewing.
- Basic icon metadata, including tags and iconset names, is included for reference or further automation.

## How it works

1. The workflow fetches all iconsets from your Iconfinder account.
2. It loops through all your iconsets, handling pagination automatically if an iconset contains more than 100 icons.
3. Each icon is processed to retrieve its metadata, including name, tags, preview image URLs, and the name of the iconset it belongs to.
4. An HTML file with a preview table and a CSV file with all icon details are generated.

## How to use

1. **Retrieve your user ID** – A dedicated node in the workflow fetches your Iconfinder user ID. This ensures the workflow knows which contributor account to access.
2. **Set up API access** – The workflow includes a setup node where you provide your Iconfinder API key. This node passes the authorization token to all subsequent HTTP Request nodes, so you don't need to enter it manually multiple times.
3. **Trigger the workflow** – Start it manually or attach a different trigger, such as a webhook or schedule.
4. **Export outputs** – The workflow generates an HTML file with preview images and a CSV file containing all metadata. Both files are ready for download or further processing.

## Requirements

- Iconfinder account with an API key.

## Customising this workflow

- Adjust the preview size or choose which metadata to include in the HTML and CSV outputs.
- Combine with other workflows to automate asset cataloging.
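The pagination logic for iconsets with more than 100 icons boils down to: keep requesting pages until a short page signals the end. The following is a minimal sketch of that loop; `fetchPage` is a stand-in for the HTTP Request node, and offset-based paging with a page size of 100 is an assumption here, so check the Iconfinder API documentation for the exact parameter names.

```javascript
// Collect every icon of an iconset by paging through results.
// fetchPage(offset, pageSize) must resolve to an array of icon objects.
async function collectAllIcons(fetchPage, pageSize = 100) {
  const icons = [];
  let offset = 0;
  while (true) {
    const page = await fetchPage(offset, pageSize);
    icons.push(...page);
    if (page.length < pageSize) break; // last (partial or empty) page reached
    offset += pageSize;
  }
  return icons;
}
```

The same loop shape works for cursor-based paging by threading the cursor instead of an offset.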
by vinci-king-01
## Deep Research Agent with AI Analysis and Multi-Source Data Collection

### 🎯 Target Audience

- Market researchers and analysts
- Business intelligence teams
- Academic researchers and students
- Content creators and journalists
- Product managers conducting market research
- Consultants performing competitive analysis
- Data scientists gathering research data
- Marketing teams analyzing industry trends

### 🚀 Problem Statement

Manual research processes are time-consuming, inconsistent, and often miss critical information from multiple sources. This template solves the challenge of automating comprehensive research across web, news, and academic sources while providing AI-powered analysis and actionable insights.

### 🔧 How it Works

This workflow automatically conducts deep research on any topic using AI-powered web scraping, collects data from multiple source types, and provides comprehensive analysis with actionable insights.

Key Components:

1. **Webhook Trigger** - Receives research requests and initiates the automated research process
2. **Research Configuration Processor** - Validates and processes research parameters and generates search queries
3. **Multi-Source AI Scraping** - Uses ScrapeGraphAI to collect data from web, news, and academic sources
4. **Data Processing Engine** - Combines and structures data from all sources for analysis
5. **AI Research Analyst** - Uses GPT-4 to provide comprehensive analysis and insights
6. **Data Storage** - Stores all research findings in Google Sheets for historical tracking
7. **Response System** - Returns structured research results via the webhook response

### 📊 Google Sheets Column Specifications

The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| sessionId | String | Unique research session identifier | "research_1703123456789" |
| query | String | Research query that was executed | "artificial intelligence trends" |
| timestamp | DateTime | When the research was conducted | "2024-01-15T10:30:00Z" |
| analysis | Text | AI-generated comprehensive analysis | "Executive Summary: AI trends show..." |
| totalSources | Number | Total number of sources analyzed | 15 |

### 🛠️ Setup Instructions

Estimated setup time: 20-25 minutes

Prerequisites:

- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- OpenAI API account and credentials
- Google Sheets account with API access

Step-by-Step Configuration:

1. **Install Community Nodes**

   ```bash
   # Install required community nodes
   npm install n8n-nodes-scrapegraphai
   ```

2. **Configure ScrapeGraphAI Credentials**
   - Navigate to Credentials in your n8n instance
   - Add new ScrapeGraphAI API credentials
   - Enter your API key from the ScrapeGraphAI dashboard
   - Test the connection to ensure it's working

3. **Set up OpenAI Credentials**
   - Add OpenAI API credentials
   - Enter your API key from the OpenAI dashboard
   - Ensure you have access to the GPT-4 model
   - Test the connection to verify API access

4. **Set up Google Sheets Connection**
   - Add Google Sheets OAuth2 credentials
   - Grant the necessary permissions for spreadsheet access
   - Create a new spreadsheet for research data
   - Configure the sheet name (default: "Research_Data")

5. **Configure Research Parameters**
   - Update the webhook endpoint URL
   - Customize the default research parameters in the configuration processor
   - Set appropriate search-query generation logic
   - Configure research depth levels (basic, detailed, comprehensive)

6. **Test the Workflow**
   - Send a test webhook request with research parameters
   - Verify data collection from all source types
   - Check Google Sheets for proper data storage
   - Validate AI analysis output quality

### 🔄 Workflow Customization Options

Modify Research Sources:

- Add or remove source types (web, news, academic)
- Customize search queries for specific industries
- Adjust source credibility scoring algorithms
- Implement custom data extraction patterns

Extend Analysis Capabilities:

- Add industry-specific analysis frameworks
- Implement comparative analysis between sources
- Create custom insight-generation rules
- Add sentiment analysis for news sources

Customize Data Storage:

- Add more detailed metadata tracking
- Implement research versioning and history
- Create multiple sheet tabs for different research types
- Add data export capabilities

Output Customization:

- Create custom response formats
- Add research summary generation
- Implement citation and source tracking
- Create executive dashboard integration

### 📈 Use Cases

- **Market Research**: Comprehensive industry and competitor analysis
- **Academic Research**: Literature reviews and citation gathering
- **Content Creation**: Research for articles, reports, and presentations
- **Business Intelligence**: Strategic decision-making support
- **Product Development**: Market validation and trend analysis
- **Investment Research**: Due diligence and market analysis

### 🚨 Important Notes

- Respect website terms of service and robots.txt files
- Implement appropriate delays between requests to avoid rate limiting
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Consider data privacy and compliance requirements
- Validate research findings against multiple sources

### 🔧 Troubleshooting

Common issues:

- **ScrapeGraphAI connection errors**: Verify API key and account status
- **OpenAI API errors**: Check API key and model access permissions
- **Google Sheets permission errors**: Check OAuth2 scope and permissions
- **Research data quality issues**: Review the search-query generation logic
- **Rate limiting**: Adjust request frequency and implement delays
- **Webhook response errors**: Check response format and content

Support resources:

- ScrapeGraphAI documentation and API reference
- OpenAI API documentation and model specifications
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
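To make the Research Configuration Processor concrete, here is a hedged sketch of what that step might do: validate the incoming webhook body, mint a session ID in the `research_<timestamp>` format the sheet uses, and expand the topic into per-source search queries. The depth levels come from the setup instructions above, but the query templates are purely illustrative.

```javascript
// Illustrative configuration processor: validates the request and
// generates one search query per source type (web, news, academic).
function buildResearchConfig(body, now = Date.now()) {
  if (!body.topic || !body.topic.trim()) {
    throw new Error("topic is required");
  }
  // Fall back to "basic" for unknown depth values.
  const depth = ["basic", "detailed", "comprehensive"].includes(body.depth)
    ? body.depth
    : "basic";
  const topic = body.topic.trim();
  return {
    sessionId: `research_${now}`, // matches the sessionId column format
    depth,
    queries: {
      web: `${topic} overview`,
      news: `${topic} latest news`,
      academic: `${topic} research papers`,
    },
  };
}
```

Each query then feeds one of the three ScrapeGraphAI scraping branches before the results are merged for analysis.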
by David Olusola
## 💰 Auto-Send PDF Invoice When Stripe Payment is Received

This workflow automatically generates a PDF invoice every time a successful payment is received in Stripe, then emails the invoice to the customer via Gmail. Perfect for freelancers, SaaS businesses, and service providers who want to automate billing without manual effort.

### ⚙️ How It Works

1. **Stripe Payment Webhook**
   - Listens for successful payment events (payment_intent.succeeded).
   - Triggers the workflow whenever a new payment is made.
2. **Normalize Payment Data**
   - A Code node extracts and formats details such as payment ID, amount and currency, customer name and email, payment date, and description.
   - Generates a unique invoice number.
3. **Generate Invoice HTML**
   - A Code node builds a professional invoice template in HTML.
   - Data is inserted dynamically (amount, customer info, invoice number).
   - The output is prepared for PDF generation.
4. **Send Invoice Email**
   - The Gmail node sends an email to the customer.
   - The invoice is attached as a PDF file.
   - Includes a confirmation message with payment details.

### 🛠️ Setup Steps

1. **Stripe webhook**
   - In your Stripe Dashboard, navigate to Developers → Webhooks.
   - Add a new endpoint with the Webhook URL from the n8n Webhook node.
   - Select the event: payment_intent.succeeded
2. **Gmail setup**
   - In n8n, connect your Gmail OAuth2 credentials.
   - Emails will be sent directly from your Gmail account.
3. **Customize the invoice**
   - Open the Generate Invoice HTML node.
   - Replace "Your Company Name" with your actual business name.
   - Adjust invoice branding, colors, and layout as needed.

### 📧 Example Email Sent

Subject: Invoice INV-123456789 - Payment Confirmation

Body:

> Dear John Doe,
>
> Thank you for your payment! Please find your invoice attached.
>
> Payment Details:
> Amount: USD 99.00
> Payment ID: pi_3JXXXXXXXX
> Date: 2025-08-29
>
> Best regards,
> Your Company Name

(Attached: invoice_INV-123456789.pdf)

⚡ With this workflow, every Stripe payment automatically creates and delivers a polished PDF invoice — no manual work required.
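The "Normalize Payment Data" step can be sketched as below. Two details worth noting: Stripe reports amounts in the smallest currency unit (cents for USD), so they must be divided by 100 for display, and the invoice-numbering scheme here is illustrative, not the template's exact logic. The field paths on the payment intent (e.g. where the billing name lives) can differ by Stripe API version, so treat them as assumptions.

```javascript
// Sketch of a Code node normalizing a payment_intent.succeeded event.
function normalizePayment(event) {
  const pi = event.data.object; // the PaymentIntent object
  return {
    paymentId: pi.id,
    // Stripe amounts are integers in the smallest currency unit.
    amount: (pi.amount_received / 100).toFixed(2),
    currency: (pi.currency || "usd").toUpperCase(),
    customerName: pi.charges?.data?.[0]?.billing_details?.name || "Customer",
    customerEmail: pi.receipt_email || pi.charges?.data?.[0]?.billing_details?.email,
    description: pi.description || "Payment",
    // pi.created is a UNIX timestamp in seconds.
    date: new Date(pi.created * 1000).toISOString().slice(0, 10),
    // Illustrative invoice-number scheme, not the template's exact one.
    invoiceNumber: `INV-${pi.created}`,
  };
}
```

The normalized object then drives both the HTML invoice template and the email body.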
by KlickTipp
Community Node Disclaimer: This workflow uses KlickTipp community nodes.

## Introduction

This workflow automates Stripe checkout confirmations by capturing transaction data and syncing it into KlickTipp. Upon successful checkout, the contact's data is enriched with purchase details and tagged to trigger a personalized confirmation campaign in KlickTipp. Perfect for digital product sellers, course creators, and service providers seeking an end-to-end automated sales confirmation process.

## Benefits

- **Instant confirmation emails**: Automatically notify customers upon successful checkout—no manual processing needed.
- **Structured contact data**: Order data (invoice link, amount, transaction ID, products) is stored in KlickTipp custom fields.
- **Smart campaign triggering**: Assign dynamic tags to start automated confirmation or fulfillment sequences.
- **Seamless digital delivery**: Ideal for pairing with tools like Memberspot or Mentortools to unlock digital products post-checkout.

## Key Features

- **Stripe Webhook Trigger**:
  - Triggers on checkout.session.completed events.
  - Captures checkout data including product names, order number, and total amount.
- **KlickTipp Contact Sync**:
  - Adds or updates contacts in KlickTipp.
  - Maps Stripe data into custom fields.
  - Assigns a tag such as Stripe Checkout to initiate a confirmation campaign.
- **Router Logic (optional)**:
  - Branches logic based on product ID or Stripe payment link.
  - Enables product-specific campaigns or follow-ups.

## Setup Instructions

### KlickTipp Preparation

Create the following custom fields in your KlickTipp account:

| Field Name | Field Type |
|--------------------------|------------------|
| Stripe \| Products | Text |
| Stripe \| Total | Decimal Number |
| Stripe \| Payment ID | Text |
| Stripe \| Receipt URL | URL |

Define a tag for each product or confirmation flow, e.g., Order: Course XYZ.

### Credential Configuration

Connect your Stripe account using an API key from the Stripe Dashboard.
Authenticate your KlickTipp connection with username/password credentials (API access required).

### Field Mapping and Workflow Alignment

- Map the Stripe output fields to the KlickTipp custom fields.
- Assign the tag that triggers your post-purchase campaign.
- Ensure that required data such as the email address and opt-in info are present for the contact to be valid.

### Testing and Deployment

1. Click the Inactive toggle to activate the scenario.
2. Perform a test payment using a Stripe product link.
3. Verify in KlickTipp:
   - The contact appears with email and opt-in status.
   - The Stripe custom fields are filled.
   - The campaign tag is correctly applied and the confirmation email is sent.

⚠️ Note: Use real or test-mode API keys in Stripe depending on your testing environment. Stripe events may take a few seconds to propagate.

## Campaign Expansion Ideas

- Launch targeted upsell flows based on the product tag.
- Use confirmation placeholders such as: [[Stripe | Products]], [[Stripe | Total]], [[Stripe | Payment ID]], [[Stripe | Receipt URL]]
- Route customers to different product access portals (e.g., Memberspot, Mentortools).
- Send follow-up content over multiple days using KlickTipp sequences.

## Customization

You can extend the scenario using a Switch node to:

- Assign different tags per payment link used
- Branch into upsell or membership activation flows
- Chain additional automations such as CRM entry, Slack notification, or invoice creation

## Resources

- Use KlickTipp Community Node in n8n
- Automate Workflows: KlickTipp Integration in n8n
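The field-mapping step can be pictured as a small transform from the checkout.session.completed payload to the four KlickTipp custom fields defined in the setup table. This is a minimal sketch under stated assumptions: the line items are assumed to be expanded on the session object, and the receipt URL actually lives on the underlying charge in Stripe, so the `receipt_url` access here is illustrative rather than an exact path.

```javascript
// Illustrative mapping of a Stripe Checkout Session onto KlickTipp fields.
function mapSessionToKlickTipp(session) {
  const products = (session.line_items?.data || [])
    .map((li) => li.description)
    .join(", ");
  return {
    email: session.customer_details?.email,
    fields: {
      "Stripe | Products": products,
      // amount_total is in the smallest currency unit (cents for EUR/USD).
      "Stripe | Total": session.amount_total / 100,
      "Stripe | Payment ID": session.payment_intent,
      "Stripe | Receipt URL": session.receipt_url || "", // assumed path
    },
    tag: "Stripe Checkout", // starts the confirmation campaign
  };
}
```

The returned object lines up one-to-one with the KlickTipp Contact Sync node: email plus custom fields, then the tag assignment.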
by Gaurav
## Automated Email Verification for Google Sheets

This n8n template demonstrates how to automatically validate email addresses from your Google Sheets using a reliable email verification API. Perfect for cleaning contact lists, validating leads, and ensuring email deliverability before marketing campaigns.

Use cases are many: lead qualification for sales teams, contact list cleaning for marketing, subscriber validation for newsletters, or CRM data hygiene maintenance!

## Good to know

- The rapid-email-verifier API is free for up to 1,000 verifications per month.
- Each email verification typically takes less than 500 ms to complete.
- The workflow runs automatically every hour, checking for new entries.
- Only emails that haven't been verified yet are processed, preventing duplicate API calls.

## How it works

1. **Monitor Google Sheets**: The trigger watches your spreadsheet for new email entries every hour.
2. **Smart Filtering**: Only emails with an empty "Email Verified" column are processed, to avoid duplicates.
3. **Batch Processing**: Emails are processed one by one to respect API rate limits and ensure reliability.
4. **API Verification**: Each email is sent to the rapid-email-verifier service, which returns a validation status.
5. **Results Update**: The original sheet is updated with the verification result (valid/invalid/unknown), using the serial number as the match key.

The verification accuracy is consistently above 95%, with excellent detection of invalid, disposable, and risky email addresses!

## How to use

- The Google Sheets trigger monitors your spreadsheet automatically, but you can also test manually.
- Simply add new rows with email addresses to your connected Google Sheet.
- Leave the "Email Verified" column empty for new entries.
- The workflow will automatically process and update the verification status.

Technically, you can process unlimited emails, but consider API rate limits and costs for large batches.
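The "Smart Filtering" step amounts to one predicate over the sheet rows: keep only rows that have an email but no verification result yet, so already-checked addresses are never sent to the API twice. A minimal sketch, assuming the column names from the requirements (SrNo, Name, Email, Email Verified):

```javascript
// Return only rows that still need verification:
// a non-empty Email cell and an empty "Email Verified" cell.
function rowsNeedingVerification(rows) {
  return rows.filter(
    (row) => row["Email"] && !(row["Email Verified"] || "").trim()
  );
}
```

Each surviving row is then verified individually, and the result is written back using SrNo as the match key.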
## Requirements

- **Google Sheets account** with a spreadsheet containing the columns: SrNo, Name, Email, Email Verified
- **Google Sheets credentials** configured in n8n for both the trigger and update operations
- **Internet connection** for API access (no additional API key required for rapid-email-verifier)

## Customising this workflow

Email verification can be enhanced for many use cases:

- **Add a webhook trigger** for real-time verification when leads are captured
- **Connect to CRM systems** like HubSpot or Salesforce for direct integration
- **Add email categorization** to separate personal and business emails
- **Include bounce detection** by connecting to your email service provider
- **Set up notifications** to alert you when invalid emails are detected in important lists

This template is perfect for marketing managers, sales professionals, data analysts, and anyone managing contact databases who needs reliable email validation!
by PDF Vector
## Overview

Organizations struggle to make their document repositories searchable and accessible. Users waste time searching through lengthy PDFs, manuals, and documentation to find specific answers. This workflow creates a powerful API service that instantly answers questions about any document or image, perfect for building customer support chatbots, internal knowledge bases, or interactive documentation systems.

## What You Can Do

This workflow creates a RESTful webhook API that accepts questions about documents and returns intelligent, contextual answers. It processes various document formats, including PDFs, Word documents, text files, and images (using OCR when needed). The system maintains conversation context through session management, caches responses for performance, provides source references with page numbers, handles multiple concurrent requests, and integrates seamlessly with chatbots, support systems, or custom applications.

## Who It's For

Perfect for developer teams building conversational interfaces, customer support departments creating self-service solutions, technical writers making documentation interactive, organizations with extensive knowledge bases, and SaaS companies wanting to add document Q&A features. Ideal for anyone who needs to make large document repositories instantly searchable through natural language queries.

## The Problem It Solves

Traditional document search returns entire pages or sections, forcing users to read through irrelevant content to find answers. Support teams repeatedly answer the same questions that are already documented. This template creates an intelligent Q&A system that provides precise, contextual answers to specific questions, reducing support tickets by up to 60% and improving user satisfaction.
## Setup Instructions

1. Install the PDF Vector community node from the n8n marketplace
2. Configure your PDF Vector API key
3. Set up the webhook URL for your API endpoint
4. Configure Redis or a database for session management
5. Set the response-caching parameters
6. Test the API with sample documents and questions

## Key Features

- **RESTful API Interface**: Easy integration with any application
- **Multi-Format Support**: Handles PDFs, Word docs, text files, and images
- **OCR Processing**: Extracts text from scanned documents and screenshots
- **Contextual Answers**: Provides relevant responses with source citations
- **Session Management**: Enables conversational follow-up questions
- **Response Caching**: Improves performance for frequently asked questions
- **Analytics Tracking**: Monitors usage patterns and popular queries
- **Error Handling**: Graceful fallbacks for unsupported documents

## API Usage Example

```http
POST https://your-n8n-instance.com/webhook/doc-qa
Content-Type: application/json

{
  "documentUrl": "https://example.com/user-manual.pdf",
  "question": "How do I reset my password?",
  "sessionId": "user-123",
  "includePageNumbers": true
}
```

## Customization Options

- Add authentication and rate limiting for production use
- Implement multi-document search across entire repositories
- Create specialized prompts for technical or legal documents
- Add automatic language detection and translation
- Build conversation-history tracking for better context
- Integrate with Zendesk, Intercom, or other support systems
- Enable direct file upload support for local documents

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
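The response-caching idea from the setup instructions can be sketched as follows: answers are keyed by the document URL plus a normalized form of the question, so repeated questions skip the expensive document-processing call. A production deployment would put this in Redis with a TTL, as step 4 suggests; this in-memory Map is just to show the shape, and the normalization rules are an assumption.

```javascript
// Illustrative in-memory response cache keyed by (documentUrl, question).
const cache = new Map();

// Normalize the question so trivial variants hit the same cache entry.
function cacheKey(documentUrl, question) {
  return `${documentUrl}::${question.trim().toLowerCase().replace(/\s+/g, " ")}`;
}

// Return the cached answer, or compute and store it on a miss.
function getOrCompute(documentUrl, question, compute) {
  const key = cacheKey(documentUrl, question);
  if (!cache.has(key)) cache.set(key, compute());
  return cache.get(key);
}
```

With a Redis backend, `cache.set` would become a `SET key value EX <ttl>` so stale answers expire when the underlying document changes.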
by vinci-king-01
Creative Asset Manager with ScrapeGraphAI Analysis and Brand Compliance 🎯 Target Audience Creative directors and design managers Marketing teams managing brand assets Digital asset management (DAM) administrators Brand managers ensuring compliance Content creators and designers Marketing operations teams Creative agencies managing client assets Brand compliance officers 🚀 Problem Statement Managing creative assets manually is inefficient and error-prone, often leading to inconsistent branding, poor organization, and compliance issues. This template solves the challenge of automatically analyzing, organizing, and ensuring brand compliance for creative assets using AI-powered analysis and automated workflows. 🔧 How it Works This workflow automatically processes uploaded creative assets using ScrapeGraphAI for intelligent analysis, generates comprehensive tags, checks brand compliance, organizes files systematically, and maintains a centralized dashboard for creative teams. Key Components Asset Upload Trigger - Webhook endpoint that activates when new creative assets are uploaded ScrapeGraphAI Asset Analyzer - Uses AI to extract detailed information from visual assets Tag Generator - Creates comprehensive, searchable tags based on asset analysis Brand Compliance Checker - Evaluates assets against brand guidelines and standards Asset Organizer - Creates organized folder structures and standardized naming Creative Team Dashboard - Updates Google Sheets with organized asset information 📊 Google Sheets Column Specifications The template creates the following columns in your Google Sheets: | Column | Data Type | Description | Example | |--------|-----------|-------------|---------| | asset_id | String | Unique asset identifier | "asset_1703123456789_abc123def" | | name | String | Standardized filename | "image-social-media-2024-01-15T10-30-00.jpg" | | path | String | Storage location path | "/creative-assets/2024/01/image/social-media" | | asset_type | String | Type of 
| Field | Type | Description | Example |
|-------|------|-------------|---------|
| … | … | … creative asset | "image" |
| dimensions | String | Asset dimensions | "1920x1080" |
| file_format | String | File format | "jpg" |
| primary_colors | Array | Extracted color palette | ["#FF6B35", "#004E89"] |
| content_description | String | AI-generated content description | "Modern office workspace with laptop" |
| text_content | String | Any text visible in asset | "Welcome to our workspace" |
| style_elements | Array | Detected style characteristics | ["modern", "minimalist"] |
| generated_tags | Array | Comprehensive tag list | ["high-resolution", "brand-logo", "social-media"] |
| usage_context | String | Suggested usage context | "social-media" |
| brand_elements | Array | Detected brand elements | ["logo", "typography"] |
| compliance_score | Number | Brand compliance score (0-100) | 85 |
| compliance_status | String | Approval status | "approved-with-warnings" |
| compliance_issues | Array | List of compliance problems | ["Non-brand colors detected"] |
| upload_date | DateTime | Asset upload timestamp | "2024-01-15T10:30:00Z" |
| searchable_keywords | String | Search-optimized keywords | "image social-media modern brand-logo" |

🛠️ Setup Instructions

Estimated setup time: 25-30 minutes

Prerequisites

- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access
- File upload system or DAM integration
- Brand guidelines document (for compliance configuration)

Step-by-Step Configuration

1. Install Community Nodes

Install the required community node:

```
npm install n8n-nodes-scrapegraphai
```

2. Configure ScrapeGraphAI Credentials

- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

3. Set up Google Sheets Connection

- Add Google Sheets OAuth2 credentials
- Grant the necessary permissions for spreadsheet access
- Create a new spreadsheet for creative asset management
- Configure the sheet name (default: "Creative Assets Dashboard")

4. Configure Webhook Trigger

- Set up the webhook endpoint for asset uploads
- Configure the webhook URL in your file upload system
- Ensure the asset_url parameter is passed in the webhook payload
- Test webhook connectivity

5. Customize Brand Guidelines

- Update the Brand Compliance Checker node with your brand colors
- Configure approved file formats and size limits
- Set required brand elements and fonts
- Define resolution standards and quality requirements

6. Configure Asset Organization

- Customize folder structure preferences
- Set up naming conventions for different asset types
- Configure metadata extraction preferences
- Set up search optimization parameters

7. Test and Validate

- Upload a test asset to trigger the workflow
- Verify all analysis steps complete successfully
- Check Google Sheets for proper data formatting
- Validate brand compliance scoring

🔄 Workflow Customization Options

Modify Analysis Parameters

- Adjust ScrapeGraphAI prompts for specific asset types
- Customize tag generation algorithms
- Modify color analysis sensitivity
- Add industry-specific analysis criteria

Extend Brand Compliance

- Add more sophisticated brand guideline checks
- Implement automated correction suggestions
- Include legal compliance verification
- Add accessibility compliance checks

Customize Organization Structure

- Modify the folder hierarchy based on team preferences
- Implement custom naming conventions
- Add version control and asset history
- Configure backup and archiving rules

Output Customization

- Add integration with DAM systems
- Implement asset approval workflows
- Create automated reporting and analytics
- Add team collaboration features

📈 Use Cases

- **Brand Asset Management**: Automatically organize and tag brand assets
- **Compliance Monitoring**: Ensure all assets meet brand guidelines
- **Creative Team Collaboration**: Centralized asset management and sharing
- **Marketing Campaign Management**: Organize assets by campaign and context
- **Asset Discovery**: AI-powered search and recommendation system
- **Quality Control**: Automated quality and compliance checks

🚨 Important Notes

- Respect ScrapeGraphAI API rate limits and terms of service
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update brand guidelines in the compliance checker
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Consider data privacy and copyright compliance for creative assets
- Ensure proper backup and version control for important assets

🔧 Troubleshooting

Common Issues:

- ScrapeGraphAI connection errors: verify API key and account status
- Webhook trigger failures: check the webhook URL and payload format
- Google Sheets permission errors: check OAuth2 scope and permissions
- Asset analysis errors: review the ScrapeGraphAI prompt configuration
- Brand compliance false positives: adjust guideline parameters
- File organization issues: check folder permissions and naming conventions

Support Resources:

- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
- Digital asset management best practices
- Brand compliance and governance guidelines
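The compliance scoring used by the Brand Compliance Checker node can be sketched in plain JavaScript. This is an illustrative assumption, not the template's actual rules: the approved palette, format list, and point deductions below are placeholders you would replace with your own brand guidelines.

```javascript
// Hypothetical sketch of a Brand Compliance Checker Code node.
// Output field names mirror the dashboard columns above.
const BRAND = {
  colors: ["#FF6B35", "#004E89"], // assumed approved palette
  formats: ["jpg", "png", "svg"], // assumed approved formats
};

function checkCompliance(asset) {
  const issues = [];
  let score = 100;

  // Flag any color outside the approved palette
  if (!asset.primary_colors.every((c) => BRAND.colors.includes(c))) {
    issues.push("Non-brand colors detected");
    score -= 15;
  }

  // Flag unapproved file formats
  if (!BRAND.formats.includes(asset.file_format)) {
    issues.push(`Unapproved file format: ${asset.file_format}`);
    score -= 25;
  }

  // Map the numeric score to an approval status
  const status =
    score === 100 ? "approved" :
    score >= 70 ? "approved-with-warnings" : "rejected";

  return {
    compliance_score: score,
    compliance_status: status,
    compliance_issues: issues,
  };
}
```

With the placeholder deductions above, an asset containing one off-brand color would score 85 with status "approved-with-warnings", matching the example row in the table.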
by Samuel Heredia
Data Extraction from MongoDB

Overview

This workflow exposes a public HTTP GET endpoint that reads all documents from a MongoDB collection, with:

- Strict validation of the collection name
- Error handling with proper 4xx codes
- Response formatting (e.g., _id → id) and a consistent 2xx JSON envelope

Workflow Steps

Webhook Trigger: *A public GET endpoint receives requests with the collection name as a parameter.*

The workflow begins with a webhook that listens for incoming HTTP GET requests. The endpoint follows this pattern:

```
https://{{your-n8n-instance}}/webhook-test/{{uuid}}/:nameCollection
```

The :nameCollection parameter is passed directly in the URL and specifies the MongoDB collection to query. Example: https://yourdomain.com/webhook-test/abcd1234/orders would attempt to fetch all documents from the orders collection.

Validation: *The collection name is checked against a set of rules to prevent invalid or unsafe queries.*

Before querying the database, the collection name is validated with a regular expression:

```
^(?!system\.)[a-zA-Z0-9._]{1,120}$
```

Purpose of the validation:

- Blocks access to MongoDB's reserved system.* collections.
- Prevents injection attacks by allowing only alphanumeric characters, underscores, and dots.
- Enforces MongoDB's length restriction (max 120 characters).

This step ensures the workflow cannot be exploited with malicious input.

Conditional Check: *If validation fails, the workflow stops and returns an error message; if it succeeds, it continues.*

If the name is valid ✅, the workflow proceeds to query MongoDB. If it is invalid ❌, it immediately returns a structured HTTP 400 response, following RESTful conventions:

```
{ "code": 400, "message": "{{ $json.message }}" }
```

MongoDB Query: *The workflow connects to MongoDB and retrieves all documents from the specified collection.*

To use the MongoDB node, a database connection must be configured in n8n.
This is done through MongoDB Credentials in the node settings:

Create MongoDB Credentials in n8n

- Go to n8n → Credentials → New.
- Select MongoDB and fill in the following fields:
  - Host: the MongoDB server hostname or IP (e.g., cluster0.mongodb.net)
  - Port: default is 27017 for local deployments
  - Database: name of the database (e.g., myDatabase)
  - User: MongoDB username with read permissions
  - Password: corresponding password
  - Connection Type: Standard for most cases, or Connection String if using a full URI
  - Replica Set / SRV Record: enable if using MongoDB Atlas or a replica cluster

Using a Connection String (recommended for MongoDB Atlas)

Example URI:

```
mongodb+srv://<username>:<password>@cluster0.mongodb.net/myDatabase?retryWrites=true&w=majority
```

Paste this into the Connection String field when selecting "Connection String" as the type.

Verify the Connection

After saving, test the credentials to confirm n8n can connect to your MongoDB instance.

Configure the MongoDB Node in the Workflow

- Operation: Find (to fetch documents)
- Collection: dynamic value passed from the workflow (e.g., {{$json["nameCollection"]}})
- Query: leave empty to fetch all documents, or define filters if needed

Result: the MongoDB node retrieves all documents from the specified collection and passes the dataset as JSON to the next node for processing.

Data Formatting: *The retrieved documents are processed to adjust field names.*

By default, MongoDB returns its unique identifier as _id. To align with common API conventions, this step renames _id → id. This small transformation simplifies downstream usage, making responses more intuitive for client applications.

Response: *The cleaned dataset is returned as a structured JSON response to the original request.*

Clients receive a clean JSON payload with the expected format and renamed identifiers.
Example response:

```
[
  { "id": "64f13c1e2f1a5e34d9b3e7f0", "name": "John Doe", "email": "john@example.com" },
  { "id": "64f13c1e2f1a5e34d9b3e7f1", "name": "Jane Smith", "email": "jane@example.com" }
]
```

Workflow Summary

Webhook (GET) → Code (Validation) → IF (Validation Check) → MongoDB (Query) → Code (Transform IDs) → Respond to Webhook

Key Benefits

✅ Security-first design: prevents unauthorized access and injection attacks.
✅ Standards compliance: uses proper HTTP status codes (400) for invalid requests.
✅ Clean API response: transforms MongoDB's native _id into a more user-friendly id.
✅ Scalability: ready for integration with any frontend, third-party service, or analytics pipeline.
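The validation and ID-renaming steps above can be sketched as plain JavaScript, similar in spirit to what the workflow's two Code nodes do. The regex is the one quoted in the Validation step; the function names and the 400 message text are illustrative assumptions:

```javascript
// Sketch of the two Code nodes: collection-name validation and _id → id renaming.
// Same regex as in the workflow: no system.* collections, allowed charset, max 120 chars.
const VALID_COLLECTION = /^(?!system\.)[a-zA-Z0-9._]{1,120}$/;

function validateCollection(name) {
  if (typeof name !== "string" || !VALID_COLLECTION.test(name)) {
    // Mirrors the structured 400 envelope returned on the invalid branch
    return { valid: false, code: 400, message: `Invalid collection name: ${name}` };
  }
  return { valid: true };
}

function renameIds(docs) {
  // MongoDB returns _id; rename it to id for the API response
  return docs.map(({ _id, ...rest }) => ({ id: _id, ...rest }));
}
```

For example, `validateCollection("system.users")` is rejected by the negative lookahead, while `validateCollection("orders")` passes and the documents it returns get their `_id` rewritten to `id` before the webhook response.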
by David Olusola
🗂️ Auto-Create Airtable CRM Records for Zoom Attendees

This workflow automatically logs every Zoom meeting attendee into an Airtable CRM, capturing their details for sales follow-up, reporting, or onboarding.

⚙️ How It Works

1. Zoom Webhook → captures the participant join event.
2. Normalize Data → extracts attendee name, email, and join/leave times.
3. Airtable → saves/updates the record with meeting + contact info.

🛠️ Setup Steps

1. Zoom
   - Create a Zoom App with the meeting.participant_joined event.
   - Paste the workflow webhook URL.
2. Airtable
   - Create a base called CRM with a table named Attendees.
   - Columns: Meeting ID, Topic, Name, Email, Join Time, Leave Time, Duration, Tag.
3. n8n
   - Replace YOUR_AIRTABLE_BASE_ID and YOUR_AIRTABLE_TABLE_ID in the workflow.
   - Connect your Airtable API key.

📊 Example Airtable Row

| Meeting ID | Topic | Name | Email | Join Time | Duration | Tag |
|------------|-------|------|-------|-----------|----------|-----|
| 999-123-456 | Sales Demo | Sarah L. | sarah@email.com | 2025-08-30T10:02:00Z | 45 min | New Lead |

⚡ With this workflow, every Zoom attendee becomes a structured CRM record automatically.
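The Normalize Data step can be sketched as a small JavaScript function that maps the event onto the Airtable columns and derives Duration from the join/leave timestamps. The input field names are assumptions for illustration (Zoom's actual webhook nests participant data, and the leave time would come from a corresponding participant-left event):

```javascript
// Hypothetical sketch of the Normalize Data Code node.
// evt field names are assumed, not Zoom's exact payload shape.
function normalizeAttendee(evt) {
  const join = new Date(evt.join_time);
  const leave = new Date(evt.leave_time);
  // Whole minutes between leave and join
  const minutes = Math.round((leave - join) / 60000);
  return {
    "Meeting ID": evt.meeting_id,
    "Topic": evt.topic,
    "Name": evt.name,
    "Email": evt.email,
    "Join Time": evt.join_time,
    "Leave Time": evt.leave_time,
    "Duration": `${minutes} min`,
    "Tag": "New Lead", // default tag, as in the example row
  };
}
```

Fed the values from the example row (join 10:02, leave 10:47), this yields a Duration of "45 min" and keys that match the Attendees table columns one-to-one.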