by Daniel
Generate stunning 10-second AI-crafted nature stock videos on autopilot and deliver them straight to your Telegram chat—perfect for content creators seeking effortless inspiration without the hassle of manual prompting or editing.

📋 What This Template Does

This workflow automates the creation and delivery of high-quality, 10-second nature-themed videos using AI generation tools. Triggered on a schedule, it leverages Google Gemini to craft precise video prompts, submits them to the Kie AI API for video synthesis, polls for completion, downloads the result, and sends it via Telegram.

- Dynamically generates varied nature scenes (e.g., misty forests, ocean sunsets) with professional cinematography specs.
- Handles asynchronous video processing with webhook callbacks for efficiency.
- Ensures commercial-ready outputs: watermark-free, portrait aspect ratio, natural ambient audio.
- Customizable schedule for daily or weekly bursts of creative B-roll footage.

🔧 Prerequisites

- n8n instance with HTTP Request and LangChain nodes enabled.
- Google Gemini API access for prompt generation.
- Kie AI API account for video creation (supports Sora-like text-to-video models).
- Telegram bot for message delivery.

🔑 Required Credentials

Google Gemini API Setup
1. Go to aistudio.google.com → Create API key.
2. Ensure the key has access to Gemini 1.5 Flash or Pro models.
3. Add to n8n as a "Google Gemini API" credential.

Kie AI API Setup
1. Sign up at kie.ai → Dashboard → API Keys.
2. Generate a new API key with video generation permissions (sora-2-text-to-video model).
3. Add to n8n as an "HTTP Header Auth" credential (header: Authorization, value: Bearer [Your API Key]).

Telegram Bot API Setup
1. Create a bot via @BotFather on Telegram → copy the API token.
2. Note your target chat ID (use @userinfobot for personal chats).
3. Add to n8n as a "Telegram API" credential.

⚙️ Configuration Steps

1. Import the workflow JSON into your n8n instance.
2. Assign the required credentials to the Gemini, Kie AI, and Telegram nodes.
3. Update the Telegram node's chat ID with your target chat (personal or group).
4. Adjust the Schedule Trigger interval (e.g., daily at 9 AM) via the node settings.
5. Activate the workflow and monitor the first execution for video delivery.

🎯 Use Cases

- Content creators automating daily social media B-roll: generate fresh nature clips for Instagram Reels or YouTube intros without filming.
- Marketing teams sourcing versatile stock footage: quickly produce themed videos for campaigns, such as serene landscapes for wellness brands.
- Educational bots for classrooms: deliver randomized nature videos to Telegram groups for biology lessons on ecosystems and wildlife.
- Personal productivity: schedule motivational nature escapes to your chat for remote workers needing quick digital breaks.

⚠️ Troubleshooting

- Video generation fails with a quota error: check the Kie AI dashboard for usage limits and upgrade your plan if needed.
- Prompt output too generic: tweak the Video Prompting Agent's system prompt for more specificity (e.g., add seasonal themes).
- Telegram send error: verify the bot token and chat ID; test with a simple message node first.
- Webhook callback timeout: ensure your n8n production URL is publicly accessible; use ngrok for local testing.
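The submit-then-poll pattern the workflow's HTTP Request nodes implement can be sketched as follows. This is a minimal sketch: the status values and response fields are assumptions for illustration, not the documented Kie AI schema.

```python
import time

def poll_until_done(fetch_status, task_id, interval_s=10, timeout_s=600):
    """Poll a status callable until the video job reaches a terminal state.

    fetch_status(task_id) is assumed to return a dict like
    {"status": "processing" | "success" | "failed", "video_url": ...}.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = fetch_status(task_id)
        if result["status"] == "success":
            # Job finished: hand the URL to the download / Telegram step.
            return result["video_url"]
        if result["status"] == "failed":
            raise RuntimeError(f"generation failed for task {task_id}")
        # Still processing: wait before the next poll to respect rate limits.
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} did not finish in {timeout_s}s")
```

In the template this loop is spread across Wait and HTTP Request nodes; the webhook-callback variant avoids polling entirely when the API supports it.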
by LeeWei
⚙️ Proposal Generator Template
Automates proposal creation from JotForm submissions.

🧑‍💻 Author: LeeWei

🚀 Steps to Connect

JotForm Setup
- Visit JotForm to generate your API key and connect it to the JotForm Trigger node.
- Update the form field in the JotForm Trigger node with your form ID (default: 251206359432049).

Google Drive Setup
- Go to Google Drive and set up OAuth2 credentials ("Google Drive account") with access to the folder containing your template.
- Update the fileId field in the Google Drive node with your template file ID (default: 1DSHUhq_DoM80cM7LZ5iZs6UGoFb3ZHsLpU3mZDuQwuQ).
- Update the name field in the Google Drive node with your desired output file name pattern (default: ={{ $json['Company Name'] }} | Ai Proposal).

OpenAI Setup
- Visit OpenAI and generate your API key.
- Paste this key into the OpenAI and OpenAI1 nodes under the "OpenAi account 3" credentials.
- Update the modelId field in the OpenAI1 node if needed (default: gpt-4.1-mini).

Google Docs Setup
- Set up OAuth2 credentials ("Google Docs account") with edit permissions for the generated documents.
- No fields need editing; the node updates dynamically based on previous outputs.

Google Drive2 Setup
- Ensure the same Google Drive credentials ("Google Drive account") are used.
- No fields need editing; the node handles PDF conversion automatically.

Gmail Setup
- Go to Gmail and set up OAuth2 credentials ("Gmail account").
- No fields need editing; the node dynamically uses the prospect's email from JotForm.

How it works
The workflow triggers on JotForm submissions, copies a Google Drive template, downloads the audio call from the submitted link, transcribes it with OpenAI, generates a tailored proposal, updates a Google Docs file, converts it to PDF, and emails it to the prospect.

Set up steps
- Setup time: approximately 15–20 minutes.
- Detailed instructions are available in sticky notes within the workflow.
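The output-name pattern above is an n8n expression that interpolates a form field into the copied file's name. A minimal Python sketch of the same templating (the field name comes from the JotForm submission; the helper name is illustrative):

```python
def proposal_filename(submission: dict) -> str:
    # Mirrors the n8n expression ={{ $json['Company Name'] }} | Ai Proposal:
    # interpolate the form field, keep the fixed suffix.
    return f"{submission['Company Name']} | Ai Proposal"
```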
by David Olusola
🧹 Auto-Clean CSV Uploads Before Import

This workflow automatically cleans, validates, and standardizes any CSV file you upload. Perfect for preparing customer lists, sales leads, product catalogs, or any messy dataset before pushing it into Google Sheets, Google Drive, or other systems.

⚙️ How It Works

1. CSV Upload (Webhook)
- Upload your CSV via webhook (supports form-data, base64, or binary file upload).
- Handles files up to ~10 MB comfortably.

2. Extract & Parse
- Reads the raw CSV content.
- Validates file structure and headers.
- Detects and normalizes column names (e.g., First Name → first_name).

3. Clean & Standardize Data
- Removes duplicate rows (based on email or all fields).
- Deletes empty rows.
- Standardizes fields: emails → lowercased, format validated; phone numbers → normalized to (xxx) xxx-xxxx or +1 format; names → capitalized (John Smith); text → trimmed spaces and consistent spacing.
- Assigns each row a data quality score so you know how "clean" it is.

4. Generate Cleaned CSV
- Produces a cleaned CSV file with the same headers.
- Saves to Google Drive (optional).
- Ready for immediate import into Sheets or any app.

5. Google Sheets Integration (Optional)
- Clears out an existing sheet and re-imports the cleaned rows.
- Perfect for always keeping your "master sheet" clean.

6. Final Report
- Logs a processing summary: rows before and after cleaning, duplicates removed, low-quality rows removed, and average data quality score.
- Outputs a neat summary for auditing.

🛠️ Setup Steps

- Upload method: use the webhook endpoint generated by the CSV Upload Webhook node. Send the CSV via binary upload, base64 encoding, or a JSON payload with csv_content.
- Google Drive (optional): connect your Drive OAuth credentials and replace YOUR_DRIVE_FOLDER_ID with your target folder.
- Google Sheets (optional): connect Google Sheets OAuth and replace YOUR_GOOGLE_SHEET_ID with your target sheet ID.
- Customize cleaning rules: adjust the Clean & Standardize Data code node if you want different cleaning thresholds (default = 30% minimum data quality).

📊 Example Cleaning Report

Input file: raw_leads.csv
- Rows before: 2,450
- Rows after cleaning: 1,982
- Duplicates removed: 210
- Low-quality rows removed: 258
- Avg. data quality: 87%

✅ Clean CSV saved to Drive
✅ Clean data imported into Google Sheets
✅ Full processing report generated

🎯 Why Use This?

- Stop wasting time manually cleaning CSVs.
- Ensure high-quality, import-ready data every time.
- Works with any dataset: leads, contacts, e-commerce exports, logs, surveys.
- Completely free — a must-have utility in your automation toolbox.

⚡ Upload a dirty CSV → get clean, validated, standardized data instantly!
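The cleaning rules described above can be sketched in a few small helpers. This is a hedged illustration — the real Clean & Standardize Data node is JavaScript inside n8n, and the field names (`email`, `first_name`, `last_name`) are assumptions:

```python
import re

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def normalize_header(name: str) -> str:
    # "First Name" -> "first_name"
    return re.sub(r"\s+", "_", name.strip()).lower()

def clean_row(row: dict) -> dict:
    out = {}
    for key, value in row.items():
        # Trim and collapse inconsistent spacing.
        value = re.sub(r"\s+", " ", (value or "").strip())
        if key == "email":
            value = value.lower()
        elif key in ("first_name", "last_name"):
            value = value.title()
        out[key] = value
    return out

def quality_score(row: dict) -> float:
    # Fraction of non-empty fields, with a penalty for malformed emails.
    filled = sum(1 for v in row.values() if v)
    score = filled / len(row) if row else 0.0
    if row.get("email") and not EMAIL_RE.match(row["email"]):
        score *= 0.5
    return round(score, 2)
```

Rows scoring below the threshold (30% by default in the template) would be dropped before the cleaned CSV is written.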
by Rohit Dabra
Shopify MCP AI Agent Workflow for n8n

Overview

This n8n workflow showcases a full-featured AI-powered assistant connected to a Shopify store through a custom MCP (Model Context Protocol) Server toolkit. It lets users automate comprehensive Shopify store management by using AI to interact conversationally with their data and operations. The workflow can create, fetch, search, update, and delete Shopify products and orders, all triggered via simple chat messages, making day-to-day store operations frictionless and highly efficient.

Core capabilities:
- Product and order management (CRUD) via chat commands.
- Smart retrieval: the AI proactively fetches details instead of asking repeated questions.
- Contextual memory: the AI uses n8n memory to provide context-aware, fluent responses.
- End-to-end automation: connects Shopify, OpenAI, and n8n's automation logic for seamless workflows.

This solution is ideal for Shopify merchants, agencies, and developers aiming to reduce manual overhead and enable conversational, AI-powered commerce automation in their operations.

🎬 Watch the demo video on YouTube

Step-by-Step Setup Guide

Follow these steps to import and configure the Shopify MCP AI Agent workflow in n8n:

1. Import the Workflow File
- Download the workflow file from this Creator Hub listing.
- In your n8n instance, go to Workflows > Import from File and upload the JSON.

2. Prepare Shopify Access
- Log in to your Shopify admin.
- Create a Custom App (or use an existing app) and retrieve the Admin API Access Token.
- Ensure the app has the relevant permissions for Products, Orders, Discounts, and Store Settings.

3. Set Up Credentials in n8n
- In n8n, navigate to Credentials and add a new Shopify API credential using your Access Token.
- Name it something memorable (e.g., "Shopify Access Token account") to match the credential used in the workflow nodes.

4. Configure the MCP Server Connection
- Make sure your MCP Server is running and exposes API endpoints for product/order management.
- Update the relevant connection endpoints in the workflow if you run your MCP Server locally or elsewhere.

5. Connect OpenAI or Another LLM Provider
- Provide your API key for OpenAI GPT or a compatible model.
- Link the credential to the OpenAI Chat Model node (swap in another provider if required).

6. (Optional) Customize for Your Needs
- Tweak node logic, add new triggers, or extend memory features as required.
- Add, remove, or restrict the AI's capabilities to fit your operational needs.
- Configure chat triggers for more personalized workflows.

7. Testing
- Use the "When chat message received" trigger or send HTTP requests to the workflow's endpoint.
- Example: "Create an order for Jane Doe, 3 Black T-shirts" or "Show today's fulfilled orders".
- The workflow and AI agent will handle context, fetch/store data, and reply accordingly.

8. Ready to Automate!
- Begin leveraging conversational automation to manage your Shopify store.
- For additional tips, consult the workflow's internal documentation and n8n's official guides.

Additional Notes
- This template includes all core Shopify product and order operations.
- The AI agent auto-resolves context, making routine admin tasks simple and quick.
- Extend or fork the workflow to suit niche scenarios — discounts, analytics, and more.
- A visual thumbnail and schematic are included for easy reference.
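Sending a chat message to the workflow's HTTP endpoint (step 7) boils down to a JSON POST. A hedged sketch of building that request — the webhook path and the `chatInput` field name are assumptions for illustration; check your own trigger node's settings:

```python
import json

def build_chat_request(base_url: str, message: str) -> dict:
    """Return the parts of an HTTP POST to the chat trigger endpoint."""
    return {
        "url": f"{base_url}/webhook/shopify-agent",  # hypothetical path
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"chatInput": message}),  # assumed field name
    }
```

The returned dict can be handed to any HTTP client (requests, urllib, curl).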
by Alexandra Spalato
Short Description

This LinkedIn automation workflow monitors post comments for specific trigger words and automatically sends direct messages with lead magnets to engaged users. The system checks connection status, handles non-connected users with connection requests, and prevents duplicate outreach by tracking all interactions in a database.

Key Features
- **Comment Monitoring**: scans LinkedIn post comments for customizable trigger words
- **Connection Status Check**: determines whether users are 1st-degree connections
- **Automated DMs**: sends personalized messages with lead magnet links to connected users
- **Connection Requests**: asks non-connected users to connect via comment replies
- **Duplicate Prevention**: tracks interactions in NocoDB to avoid repeat messages
- **Message Rotation**: uses different comment reply variations for authenticity
- **Batch Processing**: handles multiple comments with built-in delays

Who This Workflow Is For
- Content creators looking to convert post engagement into leads
- Coaches and consultants sharing valuable LinkedIn content
- Anyone wanting to automate lead capture from LinkedIn posts

How It Works
1. Setup: configure the post ID, trigger word, and lead magnet link via a form
2. Comment Extraction: retrieves all comments from the specified post using Unipile
3. Trigger Detection: filters comments containing the specified trigger word
4. Connection Check: determines whether commenters are 1st-degree connections
5. Smart Routing: connected users receive DMs; others get connection requests
6. Database Logging: records all interactions to prevent duplicates

Setup Requirements

Required Credentials
- **Unipile API Key**: for LinkedIn API access
- **NocoDB API Token**: for database tracking

Database Structure

Table: leads
- linkedin_id: LinkedIn user ID
- name: user's full name
- headline: LinkedIn headline
- url: profile URL
- date: interaction date
- posts_id: post reference
- connection_status: network distance
- dm_status: interaction type (sent / connection request)

Customization Options
- **Message Templates**: modify DM and connection request messages
- **Trigger Words**: change the words that activate the workflow
- **Timing**: adjust delays between messages (8–12 seconds by default)
- **Reply Variations**: add more comment reply options for authenticity

Installation Instructions
1. Import the workflow into your n8n instance
2. Set up the NocoDB database with the required table structure
3. Configure Unipile and NocoDB credentials
4. Set environment variables for the Unipile root URL and LinkedIn account ID
5. Test with a sample post before full use
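The message rotation and randomized delay described above can be sketched as follows. The 8–12 second range comes from the template; the reply texts and helper names are illustrative assumptions:

```python
import random

REPLY_VARIATIONS = [
    "Thanks for commenting! Let's connect so I can send it over.",
    "Appreciate the interest - sending you a connection request now.",
    "Great to see your comment! Connect with me and I'll DM the link.",
]

def pick_reply(comment_index: int) -> str:
    # Rotate deterministically through the variations for authenticity.
    return REPLY_VARIATIONS[comment_index % len(REPLY_VARIATIONS)]

def next_delay_seconds(low: float = 8.0, high: float = 12.0) -> float:
    # Random delay between messages to mimic human pacing.
    return random.uniform(low, high)
```

Adding more strings to `REPLY_VARIATIONS` is the "Reply Variations" customization the template mentions.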
by JinPark
🧩 Summary

Easily digitize and organize your business cards! This workflow allows you to upload a business card image, automatically extract contact information using Google Gemini's OCR & vision model, and save the structured data into a Notion database — no manual typing required. Perfect for teams or individuals who want to centralize client contact info in Notion after networking events or meetings.

⚙️ How it works

1. Form Submission
- Upload a business card image (.jpg, .png, or .jpeg) through an n8n form.
- Optionally select a category (e.g., Partner, Client, Vendor).

2. AI-Powered OCR (Google Gemini)
- The uploaded image is sent to Google Gemini Vision for intelligent text recognition and entity extraction.
- Gemini returns structured text data such as:

```json
{
  "Name": "Jung Hyun Park",
  "Position": "Head of Development",
  "Phone": "021231234",
  "Mobile": "0101231234",
  "Email": "abc@dc.com",
  "Company": "TOV",
  "Address": "6F, Donga Building, 212, Yeoksam-ro, Gangnam-gu, Seoul",
  "Website": "www.tov.com"
}
```

3. JSON Parsing & Cleanup
- The text response from Gemini is cleaned and parsed into a valid JSON object using a Code node.

4. Save to Notion
- The parsed data is automatically inserted into your Notion database (Customer Business Cards).
- Fields such as Name, Email, Phone, Address, and Company are mapped to Notion properties.
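The JSON parsing step above has one common wrinkle: Gemini often wraps its answer in a markdown code fence, which must be stripped before parsing. A hedged sketch of that cleanup (the real Code node is JavaScript inside n8n; this mirrors the logic):

```python
import json
import re

def parse_gemini_json(raw: str) -> dict:
    text = raw.strip()
    # Remove a leading ```json / ``` fence and a trailing ``` if present.
    text = re.sub(r"^```(?:json)?\s*", "", text)
    text = re.sub(r"\s*```$", "", text)
    return json.loads(text)
```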
🧠 Used Nodes
- **Form Trigger** – captures the uploaded business card and category input
- **Google Gemini (Vision)** – extracts contact details from the image
- **Code** – parses Gemini's output into structured JSON
- **Notion** – saves the extracted contact info to your Notion database

📦 Integrations

| Service | Purpose | Node Type |
|----------|----------|-----------|
| Google Gemini (PaLM) | Image-to-text extraction (OCR + structured entity parsing) | @n8n/n8n-nodes-langchain.googleGemini |
| Notion | Contact data storage | n8n-nodes-base.notion |

🧰 Requirements
- A connected Google Gemini (PaLM) API credential
- A Notion integration with edit access to your database

🚀 Example Use Cases
- Digitize stacks of collected business cards after a conference
- Auto-save new partner contacts to your CRM database in Notion
- Build a searchable Notion-based contact directory
- Combine with Notion filters or rollups to manage client relationships

💡 Tips
- You can easily extend this workflow by adding an email notification node to confirm successful uploads.
- For multilingual cards, Gemini Vision handles mixed-language text recognition well.
- Adjust the Gemini model (gemini-1.5-flash or gemini-1.5-pro) based on your accuracy vs. speed needs.

🧾 Template Metadata

| Field | Value |
|-------|--------|
| Category | AI + Notion + OCR |
| Difficulty | Beginner–Intermediate |
| Trigger Type | Form Submission |
| Use Case | Automate business card digitization |
| Works with | Google Gemini, Notion |
by Nalin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Complete account-based outreach automation with the Octave context engine

Who is this for?

Revenue teams, account-based marketing professionals, and growth operators who want a complete, automated pipeline from account identification to contextualized outreach. Built for teams ready to move beyond fragmented point solutions to an integrated, context-aware GTM engine.

What problem does this solve?

Most GTM teams are flying blind with disconnected tools that can't talk to each other. You qualify accounts in one system, find contacts in another, research context manually, then hope your email sequences land. Each step loses context, and by the time you're writing outreach, you've forgotten why the account was qualified in the first place. Octave centralizes all this typically fragmented context — your ICP definitions, personas, value propositions, and business logic — so every agent operation can act on the same unified understanding of your market. This workflow demonstrates how Octave's agents work together seamlessly because they all share the same context foundation.

What this workflow does

Complete account-to-outreach pipeline: this workflow demonstrates the full power of Octave's context engine by connecting multiple agent operations in a seamless flow. Unlike traditional tools that lose context at each handoff, Octave centralizes your business context — ICP definitions, personas, value propositions, competitive positioning — so every agent operates from the same unified understanding of your market.

External Context Research
- Gathers real-time external data about target accounts (job postings, news, funding, etc.)
- Processes this information to create runtime context for later use in outreach
- Establishes the "why reach out now" foundation for the entire workflow

Company-Level Qualification
- Uses Octave's company qualification to assess account fit against your specific offering
- Leverages Product- and Segment-level fit criteria defined in your Library
- Filters out accounts that don't meet your qualification thresholds
- Ensures only high-potential accounts proceed through the workflow

Intelligent Contact Discovery
- Runs Octave's prospector agent on qualified accounts
- Finds relevant stakeholders based on responsibilities and business context, not just job titles
- Discovers multiple contacts per account for comprehensive coverage
- Maintains qualification context when identifying the right people

Runtime Context Integration
- Takes the external context gathered at the beginning and injects it into sequence generation
- Creates truly dynamic, timely outreach that references current company events
- Generates sequences that feel impossibly relevant and well-researched

Multi-Contact Sequence Generation
- Splits discovered contacts into individual records for processing
- Generates contextualized email sequences for each contact
- Maintains account-level context while creating contact-specific messaging
- Produces sequences (1–7 emails) that feel unmistakably meant for each person

Automated Campaign Deployment
- Automatically adds all qualified contacts with their contextualized sequences to email campaigns
- Maps dynamic content to campaign variables for seamless execution
- Maintains the context chain from qualification through delivery

Setup

Required Credentials
- Octave API key and workspace access
- External data source API (job boards, news APIs, enrichment services, etc.)
- Email platform API key (Instantly.ai configured, easily adaptable)
- Optional: LLM credentials if using the example external research agent

Step-by-Step Configuration

1. Set up the Account Input Source
- Replace your-webhook-path-here with a unique, secure path
- Configure your account source (CRM, website visitors, target lists) to send company data
- Ensure account data includes the company name and domain for processing

2. Configure External Context Research
- Replace the example AI agent with your preferred external data source
- Set up connections to job boards, news APIs, or enrichment services
- Configure context gathering to find timely, relevant information about target accounts

3. Set up the Company Qualification Agent
- Add your Octave API credentials
- Replace your-octave-company-qualification-agent-id with your actual agent ID
- Configure qualification criteria at the Product and Segment levels in your Octave Library

4. Configure the Prospector Agent
- Replace your-octave-prospector-agent-id with your actual prospector agent ID
- Define target personas and stakeholder types in your Octave Library
- Set contact discovery parameters for optimal coverage

5. Set up the Sequence Agent
- Replace your-octave-sequence-agent-id with your actual sequence agent ID
- Configure runtime context integration for dynamic content
- Test sequence quality with the external context integration

6. Configure the Email Campaign Platform
- Add your email platform API credentials
- Replace your-campaign-id-here with your actual campaign ID
- Ensure the campaign supports custom variables for dynamic content

Required Webhook Payload Format

```json
{
  "body": {
    "companyName": "InnovateTech Solutions",
    "companyDomain": "innovatetech.com"
  }
}
```

How to customize

External Context Sources — replace the example research with your own data sources:
- **Job Board APIs:** reference current hiring and team expansion
- **News APIs:** mention funding, product launches, or market expansion
- **Enrichment Services:** pull technology adoption, market changes, or competitive moves
- **Social Monitoring:** reference recent company posts or industry discussions

Company Qualification — configure qualification in your Octave company qualification agent:
- **Product Level:** define "good fit" and "bad fit" questions for your core offering
- **Segment Level:** set criteria for different market segments or use cases
- **Qualification Thresholds:** adjust the filter score based on your standards

Contact Discovery — customize prospecting in your Octave prospector agent:
- **Target Personas:** define which Library personas to prioritize
- **Organizational Levels:** focus on specific seniority levels or decision-making authority
- **Contact Volume:** adjust how many contacts to discover per qualified account

Runtime Context Integration — configure dynamic content injection:
- **Context Definition:** specify what the external data represents in your sequences
- **Usage Instructions:** define how to incorporate the context into messaging
- **Email-Level Control:** apply different context to different emails in sequences

Sequence Generation — customize email creation:
- **Core Context (Library):** define personas, use cases, and offering definitions
- **Strategy (Playbooks):** configure messaging frameworks and value propositions
- **Writing Style (Agent):** adjust tone, voice, and communication approach

Campaign Integration — adapt for different email platforms:
- Update API endpoints and authentication for your preferred platform
- Modify variable mapping for platform-specific requirements
- Adjust sequence formatting and length based on platform capabilities

Use Cases
- Complete inbound lead processing from website visitor to qualified outreach
- Event-triggered account processing for funding announcements or hiring spikes
- Competitive displacement campaigns with account qualification and contact discovery
- Market expansion automation for entering new territories or segments
- Product launch outreach with context-aware targeting and messaging
- Customer expansion workflows for upselling within existing account bases
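Before your account source fires the webhook, it is worth validating the payload against the required format shown above. A minimal sketch (the two required fields come from the template; the helper is illustrative):

```python
REQUIRED_FIELDS = ("companyName", "companyDomain")

def validate_account_payload(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload is acceptable."""
    body = payload.get("body")
    if not isinstance(body, dict):
        return ["payload must contain a 'body' object"]
    problems = []
    for field in REQUIRED_FIELDS:
        value = body.get(field)
        if not isinstance(value, str) or not value.strip():
            problems.append(f"missing or empty field: {field}")
    return problems
```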
by Nalin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Generate dynamic email sequences with runtime context and external data

Who is this for?

Growth teams, sales development reps, and outbound marketers who want to reference specific, real-time information about prospects in their email sequences. Built for teams that have access to external data sources and want to create truly contextualized outreach that feels impossibly relevant.

What problem does this solve?

Most outbound sequences are static — they use the same messaging for everyone regardless of what's actually happening at the prospect's company right now. You might know they're hiring, launched a product, got funding, or expanded to new markets, but your email sequences can't dynamically reference these timely events. This workflow shows how to inject real-time external context into Octave's sequence generation, creating outreach that feels like you're personally monitoring each prospect's company.

What this workflow does

Lead Data & Context Collection
- Receives lead information via webhook (firstName, companyName, companyDomain, profileURL, jobTitle)
- Uses external data sources to gather timely context about the prospect's company
- Example: an AI agent researches current job postings to find roles they're actively hiring for
- Processes this context into structured data for sequence generation

Runtime Context Integration
- Feeds external context into Octave's sequence generation as "runtime context"
- Defines both WHAT the context is ("they are hiring a software engineer") and HOW to use it ("mention the role in the opening")
- Allows Octave to weave timely, relevant details into each email naturally
- Creates sequences that feel like personal research rather than mass outreach

Dynamic Sequence Generation
- Leverages Octave's context engine plus runtime data to create hyper-relevant sequences (1–7 emails)
- Generates subject lines and email content that reference specific, current company context
- Maintains your positioning and value prop while incorporating timely relevance
- Creates messaging that feels unmistakably meant for that specific moment in the prospect's business

Campaign Integration
- Automatically adds leads with contextualized sequences to your email platform
- Maps generated content to campaign variables for automated sending
- Supports multiple email platforms with easy customization

Setup

Required Credentials
- Octave API key and workspace access
- External data source API (job boards, news APIs, enrichment services, etc.)
- Email platform API key (Instantly.ai configured, easily adaptable)
- Optional: LLM credentials if using the example AI agent for testing

Step-by-Step Configuration

1. Set up the External Data Source
- Replace the AI Agent with your preferred data source (job board APIs, news APIs, company databases)
- Configure data collection to find relevant, timely information about prospects
- Structure the output to provide clean context for sequence generation

2. Set up the Octave Sequence Agent
- Add your Octave API credentials in n8n
- Replace your-octave-sequence-agent-id with your actual sequence agent ID
- Configure runtime context parameters: Runtime Context defines WHAT the external data represents; Runtime Instructions specify HOW to use it in the messaging

3. Configure the Email Platform
- Add your email platform API credentials
- Replace your-campaign-id-here with your actual campaign ID
- Ensure the campaign supports custom variables for dynamic content

4. Set up the Lead Source
- Replace your-webhook-path-here with a unique, secure path
- Configure your lead source to send prospect data to the webhook
- Test the end-to-end flow with sample leads

Required Webhook Payload Format

```json
{
  "body": {
    "firstName": "Alex",
    "lastName": "Chen",
    "companyName": "InnovateTech",
    "companyDomain": "innovatetech.com",
    "profileURL": "https://linkedin.com/in/alexchen",
    "email": "alex@innovatetech.com",
    "jobTitle": "VP of Engineering"
  }
}
```

How to customize

External Data Sources — replace the AI agent with your preferred context collection method:
- **Job Board APIs:** reference current hiring needs and team expansion
- **News APIs:** mention recent company announcements, funding, or product launches
- **Social Media Monitoring:** reference recent LinkedIn posts, company updates, or industry discussions
- **Enrichment Services:** pull real-time company data, technology stack changes, or market expansion

Runtime Context Configuration — customize how external data integrates with sequences:
- **Context Definition:** specify what the external data represents ("they just raised Series B funding")
- **Usage Instructions:** define how to incorporate it ("mention the funding in the opening and tie it to growth challenges")
- **Email-Level Control:** configure different context usage for different emails in the sequence
- **Global vs. Specific:** apply context to all emails or target specific messages

Data Processing — replace the example AI agent with your external data processing:
- Modify data source connections to pull relevant context
- Ensure consistent output formatting for runtime context integration
- Add error handling for cases where external data isn't available
- Implement fallback context for prospects without relevant external data

Sequence Customization — configure Octave sequence generation:
- **Core Context (Library):** define your personas, use cases, and offering definitions
- **Strategy (Playbooks):** configure messaging frameworks and value proposition delivery
- **Writing Style (Agent):** adjust tone, voice, and communication style

Email Platform Integration — adapt for different email sequencing platforms:
- Update API endpoints and authentication for your preferred platform
- Modify variable mapping for platform-specific custom fields
- Adjust sequence length and formatting requirements

Use Cases
- Job posting-triggered outreach for hiring managers and HR teams
- Funding announcement follow-ups for growth-stage companies
- Product launch congratulations with relevant use case discussions
- Market expansion outreach when companies enter new territories
- Technology adoption sequences based on recent stack additions
- Event attendance follow-ups with session-specific references
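The runtime-context pairing described above — a WHAT (the fact) plus a HOW (the usage instruction) traveling together into the sequence agent — can be sketched as follows. The field names here are illustrative assumptions, not Octave's documented API:

```python
def build_runtime_context(fact, usage, emails=None):
    """Pair an external fact with instructions on how to use it."""
    entry = {
        "runtimeContext": fact,        # WHAT: "they are hiring a software engineer"
        "runtimeInstructions": usage,  # HOW: "mention the role in the opening"
    }
    if emails is not None:
        # Optionally scope the context to specific emails in the sequence.
        entry["applyToEmails"] = list(emails)
    return entry
```

Keeping the fact and the instruction as one unit is what lets each email reference the event naturally instead of pasting raw data into the copy.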
by Growth AI
Google Ads automated reporting to spreadsheets with Airtable

Who's it for

Digital marketing agencies, PPC managers, and marketing teams who manage multiple Google Ads accounts and need automated monthly performance reporting organized by campaign type and conversion metrics.

What it does

This workflow automatically retrieves Google Ads performance data from multiple client accounts and populates organized spreadsheets with campaign metrics. It differentiates between e-commerce (conversion value) and lead generation (conversion count) campaigns, then organizes data by advertising channel (Performance Max, Search, Display, etc.) with monthly tracking for budget and performance analysis.

How it works

The workflow follows an automated data collection and reporting process:
1. Account Retrieval: fetches client information from Airtable (project names, Google Ads IDs, campaign types)
2. Active Filter: processes only accounts marked as "Actif" for budget reporting
3. Campaign Classification: routes accounts through the e-commerce or lead generation branch based on "Typologie ADS"
4. Google Ads Queries: executes different API calls depending on campaign type (conversion value vs. conversion count)
5. Data Processing: organizes metrics by advertising channel (Performance Max, Search, Display, Video, Shopping, Demand Gen)
6. Dynamic Spreadsheet Updates: automatically fills the correct monthly column in client spreadsheets
7. Sequential Processing: handles multiple accounts with wait periods to avoid API rate limits

Requirements
- Airtable account with a client database
- Google Ads API access with a developer token
- Google Sheets API access
- Client-specific spreadsheet templates (provided)

How to set up

Step 1: Prepare your reporting template
- Copy the Google Sheets reporting template
- Create individual copies for each client
- Ensure the proper column structure (months in columns B–M for January–December)
- Link template URLs in your Airtable database

Step 2: Configure your Airtable database
Set up the following fields in your Airtable:
- Project names: client project identifiers
- ID GADS: Google Ads customer IDs
- Typologie ADS: campaign classification ("Ecommerce" or "Lead")
- Status - Prévisionnel budgétaire: account status ("Actif" for active accounts)
- Automation budget: URLs to client-specific reporting spreadsheets

Step 3: Set up API credentials
Configure the following authentication:
- Airtable Personal Access Token: for client database access
- Google Ads OAuth2: for advertising data retrieval
- Google Sheets OAuth2: for spreadsheet updates
- Developer Token: required for Google Ads API access
- Login Customer ID: manager account identifier

Step 4: Configure Google Ads API settings
Update the HTTP Request nodes with your credentials:
- Developer Token: replace "[Your token]" with your actual developer token
- Login Customer ID: replace "[Your customer id]" with your manager account ID
- API Version: currently uses v18 (update as needed)

Step 5: Set up scheduling
- Default schedule: runs on the 3rd of each month at 5 AM
- Cron expression: 0 5 3 * *
- Recommended timing: early-month execution so the previous month's data is complete
- Processing delay: 1-minute waits between accounts to respect API limits

How to customize the workflow

Campaign type customization

E-commerce campaigns:
- Tracks: cost and conversion value metrics
- Query: metrics.conversions_value for revenue tracking
- Use case: online stores, retail businesses

Lead generation campaigns:
- Tracks: cost and conversion count metrics
- Query: metrics.conversions for lead quantity
- Use case: service businesses, B2B companies

Advertising channel expansion

Current channels tracked:
- Performance Max: automated campaign type
- Search: text ads on search results
- Display: visual ads on partner sites
- Video: YouTube and video partner ads
- Shopping: product listing ads
- Demand Gen: audience-focused campaigns

Add new channels by modifying the data processing code nodes.

Reporting period adjustment
- Current setting: last month's data (DURING LAST_MONTH)
- Alternative periods: last 30 days, specific date ranges, quarterly reports
- Custom timeframes: modify the Google Ads query date parameters

Multi-account management
- Sequential processing: handles multiple accounts automatically
- Error handling: continues processing if individual accounts fail
- Rate limiting: built-in waits prevent API quota issues
- Batch size: no limit on the number of accounts processed

Data organization features

Dynamic monthly columns
- Automatic detection: determines the previous month's column (B–M)
- Column mapping: January = B, February = C, ..., December = M
- Data placement: updates the correct month automatically
- Multi-year support: handles year transitions seamlessly

Campaign performance breakdown
Each account populates 10 rows of data:
- Performance Max Cost (row 2)
- Performance Max Conversions/Value (row 3)
- Demand Gen Cost (row 4)
- Demand Gen Conversions/Value (row 5)
- Search Cost (row 6)
- Search Conversions/Value (row 7)
- Video Cost (row 8)
- Video Conversions/Value (row 9)
- Shopping Cost (row 10)
- Shopping Conversions/Value (row 11)

Data processing logic
- Cost conversion: automatically converts micros to euros (÷ 1,000,000)
- Precision rounding: rounds to 2 decimal places for clean presentation
- Zero handling: shows 0 for campaign types with no activity
- Data validation: handles missing or null values gracefully

Results interpretation

Monthly performance tracking
- Historical data: year-over-year comparison across all channels
- Channel performance: identify the best-performing advertising types
- Budget allocation: data-driven decisions for campaign investments
- Trend analysis: month-over-month growth or decline patterns

Account-level insights
- Multi-client view: consolidated reporting across all managed accounts
- Campaign diversity: understanding which channels clients use most
- Performance benchmarks: compare similar account types and industries
- Resource allocation: focus on high-performing accounts and channels

Use cases

Agency reporting automation
- Client dashboards: automated population of monthly performance reports
- Budget planning: historical data for next month's budget recommendations
- Performance reviews: ready-to-present data for client meetings
- Trend identification: spot patterns across multiple client accounts

Internal performance tracking
- Team productivity: track account management efficiency
- Campaign optimization: identify underperforming channels for improvement
- Growth analysis: monitor client account growth and expansion
- Forecasting: use historical data for future performance predictions

Strategic planning
- Budget allocation: data-driven distribution across advertising channels
- Channel strategy: determine which campaign types to emphasize
- Client retention: proactive identification of declining accounts
- New business: performance data to support proposals and pitches

Workflow limitations
- Monthly execution: designed for monthly reporting (not real-time)
- API dependencies: requires stable Google Ads and Sheets API access
- Rate limiting: sequential processing prevents parallel account handling
- Template dependency: requires the specific spreadsheet structure for proper data placement
- Previous month focus: optimized for completed-month data (run early in the new month)
- Manual
credential setup: Requires individual configuration of API tokens and customer IDs
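The two pieces of data-processing logic above (micros-to-euros conversion and dynamic monthly column detection) can be sketched in a Code node like this. This is an illustrative sketch, not the exported node code; the function names `toEuros` and `monthColumn` are made up here.

```javascript
// Convert a Google Ads cost from micros to euros, rounded to 2 decimals.
// Missing or null values become 0, matching the "zero handling" described above.
function toEuros(costMicros) {
  const value = Number(costMicros) || 0;
  return Math.round((value / 1_000_000) * 100) / 100;
}

// Map the previous month to its spreadsheet column: January=B, ..., December=M.
// Year transitions are handled by the Date rollover (January looks back to December).
function monthColumn(runDate) {
  const prev = new Date(runDate.getFullYear(), runDate.getMonth() - 1, 1);
  return String.fromCharCode('B'.charCodeAt(0) + prev.getMonth());
}

console.log(toEuros(12345678));                  // 12.35
console.log(monthColumn(new Date(2024, 0, 15))); // "M" (the month before January is December)
```

A run on the 3rd of the month therefore always writes into the column of the month that just finished, which is why the schedule above recommends early-month execution.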
by Nansen
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works
This workflow listens for an incoming chat message and routes it to an AI Agent. The agent is powered by your preferred Chat Model (such as OpenAI or Anthropic) and extended with the Nansen MCP tool, which enables it to retrieve onchain wallet data, token movements, and address-level insights in real time. The Nansen MCP tool uses HTTP Streamable transport and requires API key authentication via Header Auth. Read the documentation: https://docs.nansen.ai/nansen-mcp/overview

Set up steps
1. Get your Nansen MCP API key
   - Visit https://app.nansen.ai/account?tab=api
   - Generate and copy your personal API key.
2. Create a credential for authentication
   - From the homepage, click the dropdown next to "Create Workflow" → "Create Credential".
   - Select Header Auth as the method.
   - Set the Header Name to: NANSEN-API-KEY
   - Paste your API key into the Value field.
   - Save the credential (e.g., Nansen MCP Credentials).
3. Configure the Nansen MCP tool
   - Endpoint: https://mcp.nansen.ai/ra/mcp/
   - Server Transport: HTTP Streamable
   - Authentication: Header Auth
   - Credential: Select Nansen MCP Credentials
   - Tools to Include: Leave as All (or restrict as needed)
4. Configure the AI Agent
   - Connect your preferred Chat Model (e.g., OpenAI, Anthropic) to the Chat Model input.
   - Connect the Nansen MCP tool to the Tool input.
   - (Optional) Add a Memory block to preserve conversational context.
5. Set up the chat trigger
   - Use the "When chat message received" node to start the flow when a message is received.
6. Test your setup by sending prompts like:
   - What tokens are being swapped by 0xabc...123?
   - Get recent wallet activity for this address.
   - Show top holders of token XYZ.
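For reference, the Header Auth shape that n8n sends to the MCP endpoint can be sketched as below. Only the header name (NANSEN-API-KEY) and endpoint URL come from the setup steps; the helper name `nansenHeaders` is hypothetical, and the Content-Type header is an assumption about the JSON-RPC transport.

```javascript
// The MCP endpoint configured in step 3.
const NANSEN_MCP_ENDPOINT = 'https://mcp.nansen.ai/ra/mcp/';

// Build the request headers the Header Auth credential produces.
function nansenHeaders(apiKey) {
  return {
    'NANSEN-API-KEY': apiKey,           // the Header Name set in step 2
    'Content-Type': 'application/json', // assumed for the HTTP Streamable transport
  };
}

console.log(nansenHeaders('demo-key'));
```

If authentication fails, comparing your credential against this shape (exact header name, raw key as the value, no "Bearer" prefix) is a quick first check.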
by SpaGreen Creative
Send Automatic WhatsApp Order Confirmations from Shopify with Rapiwa API

Who it's for
This n8n workflow helps Shopify store owners and teams automatically confirm orders via WhatsApp. It checks whether the customer's number is valid using the Rapiwa API, sends a personalized message, and logs every attempt in Google Sheets—saving time and reducing manual work. Whether you're a solo entrepreneur or managing a small team, this solution gives you a low-cost alternative to the official WhatsApp Business API, without losing control or personalization.

Features
- Receives new order details via webhook upon order creation or update.
- Iterates over incoming data in manageable batches for smoother processing.
- Extracts and formats customer and order details from the Shopify webhook payload.
- Strips non-numeric characters from WhatsApp numbers for consistent formatting.
- Uses the Rapiwa API to check whether the WhatsApp number is valid and active.
- Branches the workflow based on number validity, separating verified from unverified numbers.
- Sends a custom WhatsApp confirmation message to verified customers using Rapiwa.
- Updates Google Sheet rows with status and validity.

How it Works / What It Does
1. Triggered by a Shopify webhook or by reading rows from a Google Sheet.
2. Normalizes and cleans the order payload.
3. Extracts details like customer name, phone, items, shipping, and payment info.
4. Cleans phone numbers (removes special characters).
5. Verifies whether the number is registered on WhatsApp via the Rapiwa API.
6. If valid: sends a templated WhatsApp message and updates the Google Sheet with validity = verified and status = sent.
7. If invalid: skips sending and updates the sheet with validity = unverified and status = not sent.
8. Adds a wait/delay between sends to prevent rate limits.
9. Keeps an audit trail in the connected Google Sheet.

How to Set Up
1. Set up a Shopify webhook for new orders (or connect a Google Sheet).
2. Create a Google Sheet with columns: name, number, order id, item name, total price, validity, status
3. Create and configure a Rapiwa Bearer token in n8n.
4. Add a Google Sheets OAuth2 credential in n8n.
5. Import the workflow in n8n and configure these nodes:
   - Webhook or Sheet Trigger
   - Loop Over Items (SplitInBatches)
   - Normalize Payload (Code)
   - Clean WhatsApp Number (Code)
   - Rapiwa WhatsApp Check (HTTP Request)
   - Conditional Branch (If)
   - Send WhatsApp Message (HTTP Request)
   - Update Google Sheet (Google Sheets)
   - Wait Node (delay per send)

Requirements
- Shopify store with order webhook enabled (or an order list in a Google Sheet)
- A verified Rapiwa API token
- A working n8n instance with HTTP and Google Sheets nodes enabled
- A Google Sheet with the required structure and valid OAuth credentials in n8n

How to Customize the Workflow
- Modify the message template with your own brand tone or emojis.
- Add country-code logic in the Clean Number node if needed.
- Use a unique order id in your Google Sheet to prevent mismatches.
- Increase or decrease the delay in the Wait node (e.g., 5–10 seconds).
- Add logic in the Code nodes to handle discounts, promotions, or additional line items.

Workflow Highlights
- Triggered by a Shopify webhook on order creation or update.
- Receives new order data from Shopify via webhook.
- Cleans and extracts order data from the raw payload.
- Normalizes and validates the customer's WhatsApp number using the Rapiwa API.
- Verifies the WhatsApp number using Rapiwa's verify-whatsapp endpoint.
- Sends the order confirmation via Rapiwa's send-message endpoint.
- Logs every result into Google Sheets (verified/unverified + sent/not sent).

Setup in n8n

1. Check WhatsApp Registration
Use an HTTP Request node:
- URL: https://app.rapiwa.com/api/verify-whatsapp
- Method: POST
- Auth: httpBearerAuth using your Rapiwa token
- Body: { "number": "cleaned_number" }

2. Branch Based on Validity
Use an If node:
- Condition: {{ $json.data.exists }} == true (or "true" if the API returns a string)

3. Send Message via Rapiwa
- Endpoint: https://app.rapiwa.com/api/send-message
- Method: POST
- Body:

Hi {{ $json.customer_full_name }},
Thank you for shopping with SpaGreen Creative! We're happy to confirm that your order has been successfully placed.
🧾 Order Details
• Product: {{ $json.line_item.title }}
• SKU: {{ $json.line_item.sku }}
• Quantity: {{ $json.line_item.quantity }}
• Vendor: {{ $json.line_item.vendor }}
• Order ID: {{ $json.name }}
• Product ID: {{ $json.line_item.product_id }}
📦 Shipping Information
{{ $json.shipping_address.address1 }} {{ $json.shipping_address.address2 }}
{{ $json.shipping_address.city }}, {{ $json.shipping_address.country }} - {{ $json.shipping_address.zip }}
💳 Payment Summary
• Subtotal: {{ $json.subtotal_price }} BDT
• Tax (VAT): {{ $json.total_tax_amount }} BDT
• Shipping: {{ $json.total_shipping_amount }} BDT
• Discount: {{ $json.total_discount_amount }} BDT
• Total Paid: {{ $json.total_price }} BDT
Order Date: {{ $json.created_date }}
Warm wishes,
Team SpaGreen Creative

Sample Google Sheet Structure
A Google Sheet formatted like this:

| name | number | order id | item name | total price | validity | status |
| ----------- | ------------- | ------------- | ------------------------------ | ----------- | -------- | ------ |
| Abdul Mannan | 8801322827799 | 8986469695806 | Iphone 10 | 1150 | verified | sent |
| Abdul Mannan | 8801322827799 | 8986469695806 | S25 UltraXXXXeen Android Phone | 23000 | verified | sent |

Tips
- Always ensure phone numbers include a country code (e.g., 880 for BD).
- Clean numbers with a regex: replace(/\D/g, '')
- Adjust Rapiwa API response parsing depending on the actual structure (true vs "true").
- Use row_number for sheet updates, or a unique order id for better targeting.
- Use the Wait node to add 3–10 seconds between sends.

Important Notes
- Avoid reordering sheet rows—updates rely on a consistent row_number.
- shopify-app-auth is the credential name used in the export—make sure it's your Rapiwa token.
- Use a test sheet before going live.
- Rapiwa has request limits—avoid rapid sending.
- Add media/image logic later using message_type: media.

Future Enhancements (Ideas)
- Add a Telegram/Slack alert once the batch finishes.
- Include media (e.g., product image, invoice) in the message.
- Detect and resend failed messages.
- Integrate with Shopify's GraphQL API for additional data.
- Auto-mark fulfillment status based on WhatsApp confirmation.

Support & Community
- WhatsApp: 8801322827799
- Discord: discord
- Facebook Group: facebook group
- Website: https://spagreen.net
- Envato/Codecanyon: codecanyon portfolio
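The number-cleaning and validity-branch steps described above can be sketched as Code-node logic like this. It is a minimal sketch, not the exported node code: the function names are illustrative, and the true-vs-"true" handling follows the tip in the setup notes.

```javascript
// Strip everything except digits, as the tips above suggest
// (e.g. "+880 1322-827799" becomes "8801322827799").
function cleanNumber(raw) {
  return String(raw).replace(/\D/g, '');
}

// Rapiwa's verify-whatsapp response may report `exists` as a boolean
// or as the string "true", so accept both when branching.
function isVerified(apiResponse) {
  const exists = apiResponse && apiResponse.data && apiResponse.data.exists;
  return exists === true || exists === 'true';
}

console.log(cleanNumber('+880 1322-827799'));          // "8801322827799"
console.log(isVerified({ data: { exists: 'true' } })); // true
console.log(isVerified({ data: { exists: false } }));  // false
```

Running the cleaned number through `isVerified` before the send step is what keeps unverified rows on the "not sent" branch of the If node.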
by Evoort Solutions
📺 Automated YouTube Video Metadata Extraction

Workflow Description:
This workflow leverages the YouTube Metadata API to automatically extract detailed video information from any YouTube URL. It uses n8n to automate the entire process and stores the metadata in a neatly formatted Google Docs document. Perfect for content creators, marketers, and researchers who need quick, organized YouTube video insights at scale.

⚙️ Node-by-Node Explanation

1. ✅ On Form Submission
This node acts as the trigger. When a user submits a form containing a YouTube video URL, the workflow is activated.
- Input: YouTube video URL
- Platform: Webhook or n8n Form Trigger

2. 🌐 YouTube Metadata API (HTTP Request)
This node sends the video URL to the YouTube Metadata API via an HTTP request.
- Action: GET request
- Headers:
  -H "X-RapidAPI-Key: YOUR_API_KEY"
  -H "X-RapidAPI-Host: youtube-metadata1.p.rapidapi.com"
- Endpoint example: https://youtube-metadata1.p.rapidapi.com/video?url=YOUTUBE_VIDEO_URL
- Output: JSON with metadata such as title, description, views, likes, comments, duration, upload date, channel info, and thumbnails

3. 🧠 Reformat Metadata (Code Node)
This node reformats the raw metadata into a clean, human-readable text block.
Example output format:
🎬 Title: How to Build Workflows with n8n
🧾 Description: This tutorial explains how to build...
👤 Channel: n8n Tutorials
📅 Published On: 2023-05-10
⏱️ Duration: 10 minutes, 30 seconds
👁️ Views: 45,678
👍 Likes: 1,234
💬 Comments: 210
🔗 URL: https://youtube.com/watch?v=abc123

4. 📝 Append to Google Docs
This node connects to your Google Docs account and appends the formatted metadata to a selected document.
Document format example:
📌 Video Entry – [Date]
🎬 Title:
🧾 Description:
👤 Channel:
📅 Published On:
⏱️ Duration:
👁️ Views:
👍 Likes:
💬 Comments:
🔗 URL:
---

📄 Use Cases
- Content Creators: Quickly analyze competitor content or inspirations.
- Marketers: Collect campaign video performance data.
- Researchers: Compile structured metadata across videos.
- Social Media Managers: Create content briefs effortlessly.

✅ Benefits
- 🚀 Time-saving: Automates manual video data extraction
- 📊 Accurate: Uses a reliable, up-to-date YouTube API
- 📁 Organized: Formats and stores data in Google Docs
- 🔁 Scalable: Processes YouTube URLs in any volume your API plan allows
- 🎯 User-friendly: Simple setup and clean output

🔑 How to Get Your API Key for the YouTube Metadata API
1. Go to the YouTube Metadata API on RapidAPI.
2. Sign up or log in to your RapidAPI account.
3. Click Subscribe to Test and choose a pricing plan (free or paid).
4. Copy your API key shown in the "X-RapidAPI-Key" section.
5. Use it in your HTTP request headers.

🧰 Google Docs Integration – Full Setup Instructions

🔐 Step 1: Enable the Google Docs API
1. Go to the Google Cloud Console.
2. Create a new project or select an existing one.
3. Navigate to APIs & Services > Library.
4. Search for Google Docs API and click Enable.
5. Also enable the Google Drive API (for document access).

🛠 Step 2: Create OAuth Credentials
1. Go to APIs & Services > Credentials.
2. Click Create Credentials > OAuth Client ID.
3. Select Web Application or Desktop App.
4. Add authorized redirect URIs if needed (e.g., for n8n OAuth).
5. Save your Client ID and Client Secret.

🔗 Step 3: Connect n8n to Google Docs
1. In n8n, go to Credentials > Google Docs API.
2. Add new credentials using the Client ID and Secret from above.
3. Authenticate with your Google account and allow access.

📘 Step 4: Create and Format Your Google Document
1. Go to Google Docs and create a new document.
2. Name it (e.g., YouTube Metadata Report).
3. Optionally, add a title or table of contents.
4. Copy the Document ID from the URL: https://docs.google.com/document/d/DOCUMENT_ID/edit

🔄 Step 5: Use the Append Content to Document Node in n8n
Use the Google Docs node in n8n with:
- Operation: Append Content
- Document ID: your copied Google Doc ID
- Content: the formatted video summary string

🎨 Customization Options
- 💡 Add Tags: Insert hashtags or categories based on video topics.
- 📆 Organize by Date: Create headers for each day's or week's entries.
- 📸 Embed Thumbnails: Use thumbnail_url to embed preview images.
- 📊 Spreadsheet Export: Use Google Sheets instead of Docs if preferred.

🛠 Troubleshooting Tips
| Issue | Solution |
| ------------------------------ | ------------------------------------------------------------------- |
| ❌ Auth Error (Google Docs) | Ensure the correct OAuth redirect URI and permissions. |
| ❌ API Request Fails | Check the API key and request structure; test on RapidAPI's playground. |
| 📄 Doc Not Updating | Verify the Document ID and sharing permissions. |
| 🧾 Bad Formatting | Debug the Code node output using logging or the console in n8n. |
| 🌐 n8n Timeout | Consider using Wait or Split In Batches for large submissions. |

🚀 Ready to Launch?
You can deploy this workflow in just minutes using n8n.
👉 Start Automating with n8n
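The Reformat Metadata Code node described above can be sketched like this. Note it is an illustration only: the input field names (`title`, `channel`, `publishDate`, `viewCount`, `url`) are assumptions about the API response shape and should be adjusted to the actual JSON returned by the endpoint.

```javascript
// Turn raw metadata into the human-readable block shown in the example output.
function formatMetadata(video) {
  // Format the view count with thousands separators, e.g. 45678 -> "45,678".
  const views = Number(video.viewCount || 0).toLocaleString('en-US');
  return [
    `🎬 Title: ${video.title}`,
    `👤 Channel: ${video.channel}`,
    `📅 Published On: ${video.publishDate}`,
    `👁️ Views: ${views}`,
    `🔗 URL: ${video.url}`,
  ].join('\n');
}

const text = formatMetadata({
  title: 'How to Build Workflows with n8n',
  channel: 'n8n Tutorials',
  publishDate: '2023-05-10',
  viewCount: 45678,
  url: 'https://youtube.com/watch?v=abc123',
});
console.log(text);
```

The returned string is what the Append Content to Document node writes into the Google Doc, so any extra fields (description, likes, duration) can be added as more lines in the same array.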