by Marth
# Website Lead Notification System

Let's build this simple, high-value workflow. Here is a detailed, node-by-node explanation of how it works and how to set it up in n8n.

## How It Works

This workflow acts as a bridge between your website's contact form and your sales team. It waits for a submission from your website via a Webhook. As soon as a new lead fills out the form, the workflow instantly captures their data and sends a formatted notification to your team's Slack channel. This ensures your team can respond to new leads in real time, without any delays.

## Setup Steps

### 1. Webhook Trigger: Receive Website Form Submissions

- **Node Type:** Webhook Trigger
- **Parameters:**
  - HTTP Method: POST
  - Path: new-lead
- **Explanation:** This node is the starting point. It creates a unique URL that you will use in your website's form submission settings. When a visitor submits your form, the data is sent to this URL as a POST request, triggering the workflow.

### 2. Slack: Notify Sales Team

- **Node Type:** Slack
- **Credentials:** YOUR_SLACK_CREDENTIAL
- **Parameters:**
  - Operation: Post Message
  - Channel: YOUR_SALES_CHANNEL_ID (e.g., #sales-leads)
  - Text: `New Website Lead! - Name: {{ $json.name }} Company: {{ $json.company }} Email: {{ $json.email }} Message: {{ $json.message }}`
- **Explanation:** This node sends a formatted message to your designated Slack channel. The curly braces {{ }} contain n8n expressions that dynamically pull the data (name, company, email, etc.) from the website form submission.

### 3. Google Sheets: Archive Lead Data (Optional)

- **Node Type:** Google Sheets
- **Credentials:** YOUR_GOOGLE_SHEETS_CREDENTIAL
- **Parameters:**
  - Operation: Add Row
  - Spreadsheet ID: YOUR_SPREADSHEET_ID
  - Sheet Name: Leads
  - Data:
    - Name: ={{ $json.name }}
    - Email: ={{ $json.email }}
    - Date: ={{ $now }}
- **Explanation:** This is an optional but recommended step. The node automatically adds a new row to a Google Sheet, creating a clean, organized archive of all your website leads.

### 4. Gmail: Send Automatic Confirmation Email (Optional)

- **Node Type:** Gmail
- **Credentials:** YOUR_GMAIL_CREDENTIAL
- **Parameters:**
  - Operation: Send
  - To: ={{ $json.email }}
  - Subject: Thanks for contacting us!
  - Body: Hi {{ $json.name }}, thanks for reaching out. We've received your message and will get back to you shortly.
- **Explanation:** This node provides a quick, professional automated response to the new lead, confirming that their message has been received.

## Final Step: Activation

1. After configuring the nodes, click "Save" at the top of the canvas.
2. Click the "Active" toggle in the top-right corner.
3. The workflow is now live and will listen for new form submissions.

Remember: you need to configure your website's form to send a POST request to the URL from your Webhook Trigger node (see the sketch below).
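A minimal sketch of the POST request your website form could send to the Webhook Trigger. The base URL is a placeholder — copy the production URL shown on your own Webhook Trigger node. The field names must match the expressions used in the Slack, Sheets, and Gmail nodes ($json.name, $json.company, $json.email, $json.message).

```js
// Placeholder URL — replace with your own n8n webhook URL ending in /new-lead.
const WEBHOOK_URL = "https://your-n8n-instance.example.com/webhook/new-lead";

async function submitLead() {
  const response = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: "Jane Doe",
      company: "Acme Inc.",
      email: "jane@acme.com",
      message: "We'd like a demo of your product.",
    }),
  });
  console.log("Webhook responded with status", response.status);
}

submitLead();
```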
by Trung Tran
# EC2 Lifecycle Manager with AI Chat Agent (Describe, Start, Stop, Reboot, Terminate)

Watch the demo video below:

## Who's it for

This workflow is designed for DevOps engineers and cloud administrators who want to manage AWS EC2 instances directly from chat platforms (Slack, Teams, Telegram, etc.) using natural language. It helps engineers quickly check EC2 instance status, start/stop servers, reboot instances, or terminate unused machines — without logging into the AWS console.

## How it works / What it does

1. A chat message (command) from the engineer triggers the workflow.
2. The EC2 Manager AI Agent interprets the request using the AI chat model and memory.
3. The agent decides which AWS EC2 action to perform:
   - DescribeInstances → list or check the status of EC2 instances.
   - StartInstances → boot up stopped instances.
   - StopInstances → gracefully shut down running instances.
   - RebootInstances → restart instances without stopping them.
   - TerminateInstances → permanently delete instances.
4. The selected tool (API call) is executed via an HTTP Request to the AWS EC2 endpoint.
5. The agent replies back in chat with the result (confirmation, instance status, errors, etc.).

## How to set up

1. **Add Chat Trigger**
   - Connect your chatbot platform (Slack/Telegram/Teams) to n8n.
   - Configure the "When chat message received" node.
2. **Configure OpenAI Chat Model**
   - Select a supported LLM (GPT-4, GPT-4.1, GPT-5, etc.).
   - Add system and user prompts to define behavior (EC2 assistant role).
3. **Add Memory**
   - Use Simple Memory to keep track of context (e.g., instance IDs, region, last action).
4. **Connect EC2 API Tools**
   - Create HTTP Request nodes for: Describe Instances, Start Instance, Stop Instance, Reboot Instance, Terminate Instance.
   - Use AWS credentials with Signature V4 authentication.
   - API endpoint: https://ec2.{region}.amazonaws.com/ (see the request sketch at the end of this description).
5. **Link Tools to Agent**
   - Attach all EC2 tools to the EC2 Manager AI Agent node.
   - Ensure the agent can choose which tool to call based on user input.

## Requirements

- **n8n instance** (self-hosted or cloud)
- **Chat platform integration** (Slack, Teams, or Telegram)
- **OpenAI (or other LLM) credentials**
- **AWS IAM user with EC2 permissions**: ec2:DescribeInstances, ec2:StartInstances, ec2:StopInstances, ec2:RebootInstances, ec2:TerminateInstances
- **AWS region configured** for API calls

## How to customize the workflow

- **Add safety checks:** Require explicit confirmation before running Stop or Terminate.
- **Region flexibility:** Add support for multi-region management by letting the user specify the region in chat.
- **Tag-based filters:** Extend DescribeInstances to return only instances matching specific tags (e.g., env=dev).
- **Cost-saving automation:** Add scheduled rules to automatically stop instances outside working hours.
- **Enhanced chatbot UX:** Format responses into tables or rich messages in Slack/Teams.
- **Audit logging:** Store each action (who/what/when) in a database or Google Sheets for compliance.
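A hedged sketch of the query-style call an HTTP Request tool node makes to the EC2 endpoint. The instance ID and region are placeholders and the API version is an assumption; in n8n, Signature V4 signing is handled by the AWS credential attached to the node, so no signing code is shown here.

```js
const region = "us-east-1";
const endpoint = `https://ec2.${region}.amazonaws.com/`;

// StopInstances shown — swap Action to DescribeInstances, StartInstances,
// RebootInstances, or TerminateInstances for the other lifecycle tools.
const params = new URLSearchParams({
  Action: "StopInstances",
  Version: "2016-11-15",                  // assumed EC2 Query API version
  "InstanceId.1": "i-0123456789abcdef0",  // placeholder instance ID
});

// The request the node signs and sends looks roughly like this:
console.log(`POST ${endpoint}?${params.toString()}`);
// → POST https://ec2.us-east-1.amazonaws.com/?Action=StopInstances&Version=2016-11-15&InstanceId.1=i-0123456789abcdef0
```

EC2 replies with XML describing the resulting state change, which the agent can summarize back to the chat user.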
by LeeWei
# ⚙️ Proposal Generator Template

(Automates proposal creation from JotForm submissions)

🧑💻 Author: [LeeWei]

## 🚀 Steps to Connect

1. **JotForm Setup**
   - Visit JotForm to generate your API key and connect it to the JotForm Trigger node.
   - Update the form field in the JotForm Trigger node with your form ID (default: 251206359432049).
2. **Google Drive Setup**
   - Go to Google Drive and set up OAuth2 credentials ("Google Drive account") with access to the folder containing your template.
   - Update the fileId field in the Google Drive node with your template file ID (default: 1DSHUhq_DoM80cM7LZ5iZs6UGoFb3ZHsLpU3mZDuQwuQ).
   - Update the name field in the Google Drive node with your desired output file name pattern (default: ={{ $json['Company Name'] }} | Ai Proposal).
3. **OpenAI Setup**
   - Visit OpenAI and generate your API key.
   - Paste this key into the OpenAI and OpenAI1 nodes under the "OpenAi account 3" credentials.
   - Update the modelId field in the OpenAI1 node if needed (default: gpt-4.1-mini).
4. **Google Docs Setup**
   - Set up OAuth2 credentials ("Google Docs account") with edit permissions for the generated documents.
   - No fields need editing, as the node dynamically updates based on previous outputs.
5. **Google Drive2 Setup**
   - Ensure the same Google Drive credentials ("Google Drive account") are used.
   - No fields need editing, as the node handles PDF conversion automatically.
6. **Gmail Setup**
   - Go to Gmail and set up OAuth2 credentials ("Gmail account").
   - No fields need editing, as the node dynamically uses the prospect's email from JotForm.

## How it works

The workflow triggers on JotForm submissions, copies a Google Drive template, downloads the audio call from the submitted link, transcribes it with OpenAI, generates a tailored proposal, updates a Google Docs file, converts it to PDF, and emails it to the prospect.

## Set up steps

Setup time: approximately 15-20 minutes. Detailed instructions are available in sticky notes within the workflow.
by David Olusola
# 🧹 Auto-Clean CSV Uploads Before Import

This workflow automatically cleans, validates, and standardizes any CSV file you upload. Perfect for preparing customer lists, sales leads, product catalogs, or any messy dataset before pushing it into Google Sheets, Google Drive, or other systems.

## ⚙️ How It Works

1. **CSV Upload (Webhook)**
   - Upload your CSV via webhook (supports form-data, base64, or binary file upload).
   - Handles files up to ~10 MB comfortably.
2. **Extract & Parse**
   - Reads raw CSV content.
   - Validates file structure and headers.
   - Detects and normalizes column names (e.g., First Name → first_name).
3. **Clean & Standardize Data** (simplified sketch at the end of this description)
   - Removes duplicate rows (based on email or all fields).
   - Deletes empty rows.
   - Standardizes fields:
     - Emails → lowercased, validated format.
     - Phone numbers → normalized (xxx) xxx-xxxx or +1 format.
     - Names → capitalized (John Smith).
     - Text → trims spaces and fixes inconsistent spacing.
   - Assigns each row a data quality score so you know how "clean" it is.
4. **Generate Cleaned CSV**
   - Produces a cleaned CSV file with the same headers.
   - Saves it to Google Drive (optional).
   - Ready for immediate import into Sheets or any app.
5. **Google Sheets Integration (Optional)**
   - Clears out an existing sheet.
   - Re-imports the cleaned rows.
   - Perfect for always keeping your "master sheet" clean.
6. **Final Report**
   - Logs a processing summary: rows before and after cleaning, duplicates removed, low-quality rows removed, average data quality score.
   - Outputs a neat summary for auditing.

## 🛠️ Setup Steps

1. **Upload Method**
   - Use the webhook endpoint generated by the CSV Upload Webhook node.
   - Send the CSV via binary upload, base64 encoding, or a JSON payload with csv_content.
2. **Google Drive (Optional)**
   - Connect your Drive OAuth credentials.
   - Replace YOUR_DRIVE_FOLDER_ID with your target folder.
3. **Google Sheets (Optional)**
   - Connect Google Sheets OAuth.
   - Replace YOUR_GOOGLE_SHEET_ID with your target sheet ID.
4. **Customize Cleaning Rules**
   - Adjust the Clean & Standardize Data code node if you want different cleaning thresholds (default = 30% minimum data quality).

## 📊 Example Cleaning Report

- Input file: raw_leads.csv
- Rows before: 2,450
- Rows after cleaning: 1,982
- Duplicates removed: 210
- Low-quality rows removed: 258
- Avg. data quality: 87%

✅ Clean CSV saved to Drive
✅ Clean data imported into Google Sheets
✅ Full processing report generated

## 🎯 Why Use This?

- Stop wasting time manually cleaning CSVs.
- Ensure high-quality, import-ready data every time.
- Works with any dataset: leads, contacts, e-commerce exports, logs, surveys.
- Completely free — a must-have utility in your automation toolbox.

⚡ Upload a dirty CSV → get clean, validated, standardized data instantly!
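A simplified sketch of the kind of logic the "Clean & Standardize Data" code node applies. The row shape and the 30% quality threshold mirror the description above; the exact scoring in the workflow may differ.

```js
const rows = [
  { first_name: "  jane ", email: "Jane@ACME.com ", phone: "555.123.4567" },
  { first_name: "jane",    email: "jane@acme.com",  phone: "(555) 123-4567" }, // duplicate email
  { first_name: "",        email: "not-an-email",   phone: "" },               // low-quality row
];

const seenEmails = new Set();
const cleaned = [];

for (const row of rows) {
  const email = (row.email || "").trim().toLowerCase();
  const name = (row.first_name || "").trim().replace(/\b\w/g, c => c.toUpperCase());
  const digits = (row.phone || "").replace(/\D/g, "");
  const phone = digits.length === 10
    ? `(${digits.slice(0, 3)}) ${digits.slice(3, 6)}-${digits.slice(6)}`
    : row.phone;

  // Crude quality score: share of fields that are present and valid.
  const checks = [/^\S+@\S+\.\S+$/.test(email), name.length > 0, digits.length >= 10];
  const quality = checks.filter(Boolean).length / checks.length;

  if (quality < 0.3) continue;          // drop low-quality rows (default threshold)
  if (seenEmails.has(email)) continue;  // drop duplicates by email
  seenEmails.add(email);

  cleaned.push({ first_name: name, email, phone, quality_score: Math.round(quality * 100) });
}

console.log(cleaned);
```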
by Rohit Dabra
# Shopify MCP AI Agent Workflow for n8n

## Overview

This n8n workflow showcases a full-featured AI-powered assistant connected to a Shopify store through a custom MCP (Multi-Channel Commerce Platform) Server toolkit. It empowers users to automate comprehensive Shopify store management by leveraging AI to interact conversationally with their data and operations. The workflow can create, fetch, search, update, and delete Shopify products and orders — all triggered via simple chat messages — making day-to-day store operations frictionless and highly efficient.

Core capabilities include:

- Product and order management (CRUD) via chat commands.
- Smart retrieval: the AI proactively fetches details instead of asking repeated questions.
- Contextual memory: the AI uses n8n memory to provide context-aware, fluent responses.
- End-to-end automation: connects Shopify, OpenAI, and n8n's automation logic for seamless workflows.

This solution is ideal for Shopify merchants, agencies, and developers aiming to reduce manual overhead and enable conversational, AI-powered commerce automation in their operations.

🎬 Watch the demo video on YouTube

## Step-by-Step Setup Guide

Follow these steps to import and configure the Shopify MCP AI Agent workflow in n8n:

1. **Import the Workflow File**
   - Download the workflow file from this Creator Hub listing.
   - In your n8n instance, go to Workflows > Import from File and upload the JSON.
2. **Prepare Shopify Access**
   - Log in to your Shopify admin.
   - Create a Custom App (or use an existing app) and retrieve the Admin API Access Token.
   - Storefront access: ensure your app has the relevant permissions for Products, Orders, Discounts, and Store Settings.
3. **Set Up Credentials in n8n**
   - In n8n, navigate to Credentials and add a new Shopify API credential using your Access Token.
   - Name it something memorable (e.g., Shopify Access Token account) to match the credential used in the workflow nodes.
4. **Configure the MCP Server Connection**
   - Make sure your MCP Server is running and accessible with API endpoints for product/order management.
   - Update any relevant connection endpoints in the workflow if you run your MCP Server locally or in a different location.
5. **Connect OpenAI or Another LLM Provider**
   - Provide your API key for OpenAI GPT or a compatible model.
   - Link the credential to the OpenAI Chat Model node (replace with other providers if required).
6. **(Optional) Customize for Your Needs**
   - Tweak node logic, add new triggers, or extend memory features as required.
   - Add, remove, or restrict the AI's capabilities to fit your operational needs.
   - Configure chat triggers for more personalized workflows.
7. **Testing**
   - Use the "When chat message received" trigger or send HTTP requests to the workflow's endpoint (see the sketch at the end of this description).
   - Example: "Create an order for Jane Doe, 3 Black T-shirts" or "Show today's fulfilled orders".
   - The workflow and AI Agent will handle context, fetch/store data, and reply accordingly.
8. **Ready to Automate!**
   - Begin leveraging conversational automation to manage your Shopify store.
   - For additional tips, consult the workflow's internal documentation and n8n's official guides.

## Additional Notes

- This template includes all core Shopify product and order operations.
- The AI agent auto-resolves context, making routine admin tasks simple and quick.
- Extend or fork the workflow to suit niche scenarios — discounts, analytics, and more.
- A visual thumbnail and schematic are included for easy reference.
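A hedged sketch of testing the workflow over HTTP instead of the chat UI. The URL is a placeholder for your chat trigger's webhook, and the field names (sessionId, chatInput) follow common n8n chat-trigger payloads — verify them against your own trigger node before relying on this.

```js
const CHAT_WEBHOOK_URL = "https://your-n8n-instance.example.com/webhook/your-chat-path";

async function askAgent(message) {
  const response = await fetch(CHAT_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      sessionId: "demo-session-1",  // lets the agent keep conversational memory
      chatInput: message,           // the natural-language command for the agent
    }),
  });
  console.log(await response.json());
}

askAgent("Create an order for Jane Doe, 3 Black T-shirts");
```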
by Alexandra Spalato
## Short Description

This LinkedIn automation workflow monitors post comments for specific trigger words and automatically sends direct messages with lead magnets to engaged users. The system checks connection status, handles non-connected users with connection requests, and prevents duplicate outreach by tracking all interactions in a database.

## Key Features

- **Comment Monitoring**: Scans LinkedIn post comments for customizable trigger words
- **Connection Status Check**: Determines if users are 1st-degree connections
- **Automated DMs**: Sends personalized messages with lead magnet links to connected users
- **Connection Requests**: Asks non-connected users to connect via comment replies
- **Duplicate Prevention**: Tracks interactions in NocoDB to avoid repeat messages
- **Message Rotation**: Uses different comment reply variations for authenticity
- **Batch Processing**: Handles multiple comments with built-in delays

## Who This Workflow Is For

- Content creators looking to convert post engagement into leads
- Coaches and consultants sharing valuable LinkedIn content
- Anyone wanting to automate lead capture from LinkedIn posts

## How It Works

1. **Setup**: Configure the post ID, trigger word, and lead magnet link via form
2. **Comment Extraction**: Retrieves all comments from the specified post using Unipile
3. **Trigger Detection**: Filters comments containing the specified trigger word
4. **Connection Check**: Determines if commenters are 1st-degree connections
5. **Smart Routing**: Connected users receive DMs; others get connection requests
6. **Database Logging**: Records all interactions to prevent duplicates

## Setup Requirements

### Required Credentials

- **Unipile API Key**: For LinkedIn API access
- **NocoDB API Token**: For database tracking

### Database Structure

**Table: leads**

- linkedin_id: LinkedIn user ID
- name: User's full name
- headline: LinkedIn headline
- url: Profile URL
- date: Interaction date
- posts_id: Post reference
- connection_status: Network distance
- dm_status: Interaction type (sent/connection request)

## Customization Options

- **Message Templates**: Modify DM and connection request messages
- **Trigger Words**: Change the words that activate the workflow
- **Timing**: Adjust delays between messages (8-12 seconds by default; see the sketch at the end of this description)
- **Reply Variations**: Add more comment reply options for authenticity

## Installation Instructions

1. Import the workflow into your n8n instance
2. Set up a NocoDB database with the required table structure
3. Configure Unipile and NocoDB credentials
4. Set environment variables for the Unipile root URL and LinkedIn account ID
5. Test with a sample post before full use
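An illustrative sketch of the message-rotation and randomized-delay ideas the workflow uses for authenticity. The reply texts are placeholders — keep your own variations in the workflow's configuration.

```js
const replyVariations = [
  "Thanks for commenting! Sending you the guide now 🙌",
  "Appreciate the interest — check your DMs for the link!",
  "Great to see you here! The lead magnet is on its way.",
];

function pickReply() {
  // Rotate randomly so repeated replies don't look templated.
  return replyVariations[Math.floor(Math.random() * replyVariations.length)];
}

function randomDelayMs(minSeconds = 8, maxSeconds = 12) {
  // Default 8-12 second spacing between messages, as in the workflow.
  return (minSeconds + Math.random() * (maxSeconds - minSeconds)) * 1000;
}

async function processComments(comments) {
  for (const comment of comments) {
    console.log(`Replying to ${comment.author}: ${pickReply()}`);
    await new Promise(resolve => setTimeout(resolve, randomDelayMs()));
  }
}

processComments([{ author: "Jane Doe" }, { author: "John Smith" }]);
```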
by JinPark
## 🧩 Summary

Easily digitize and organize your business cards! This workflow allows you to upload a business card image, automatically extract contact information using Google Gemini’s OCR & vision model, and save the structured data into a Notion database — no manual typing required.

Perfect for teams or individuals who want to centralize client contact info in Notion after networking events or meetings.

## ⚙️ How it works

1. **Form Submission**
   - Upload a business card image (.jpg, .png, or .jpeg) through an n8n form.
   - Optionally select a category (e.g., Partner, Client, Vendor).
2. **AI-Powered OCR (Google Gemini)**
   - The uploaded image is sent to Google Gemini Vision for intelligent text recognition and entity extraction.
   - Gemini returns structured text data such as:

   ```json
   {
     "Name": "Jung Hyun Park",
     "Position": "Head of Development",
     "Phone": "021231234",
     "Mobile": "0101231234",
     "Email": "abc@dc.com",
     "Company": "TOV",
     "Address": "6F, Donga Building, 212, Yeoksam-ro, Gangnam-gu, Seoul",
     "Website": "www.tov.com"
   }
   ```

3. **JSON Parsing & Cleanup**
   - The text response from Gemini is cleaned and parsed into a valid JSON object using a Code node (see the sketch at the end of this description).
4. **Save to Notion**
   - The parsed data is automatically inserted into your Notion database (Customer Business Cards).
   - Fields such as Name, Email, Phone, Address, and Company are mapped to Notion properties.

## 🧠 Used Nodes

- **Form Trigger** – Captures the uploaded business card and category input
- **Google Gemini (Vision)** – Extracts contact details from the image
- **Code** – Parses Gemini’s output into structured JSON
- **Notion** – Saves extracted contact info to your Notion database

## 📦 Integrations

| Service | Purpose | Node Type |
|----------|----------|-----------|
| Google Gemini (PaLM) | Image-to-text extraction (OCR + structured entity parsing) | @n8n/n8n-nodes-langchain.googleGemini |
| Notion | Contact data storage | n8n-nodes-base.notion |

## 🧰 Requirements

- A connected Google Gemini (PaLM) API credential
- A Notion integration with edit access to your database

## 🚀 Example Use Cases

- Digitize stacks of collected business cards after a conference
- Auto-save new partner contacts to your CRM database in Notion
- Build a searchable Notion-based contact directory
- Combine with Notion filters or rollups to manage client relationships

## 💡 Tips

- You can easily extend this workflow by adding an email notification node to confirm successful uploads.
- For multilingual cards, Gemini Vision handles mixed-language text recognition well.
- Adjust the Gemini model (gemini-1.5-flash or gemini-1.5-pro) based on your accuracy vs. speed needs.

## 🧾 Template Metadata

| Field | Value |
|-------|--------|
| Category | AI + Notion + OCR |
| Difficulty | Beginner–Intermediate |
| Trigger Type | Form Submission |
| Use Case | Automate business card digitization |
| Works with | Google Gemini, Notion |
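A minimal sketch of the cleanup the Code node performs on Gemini's reply: strip any markdown code fences the model adds, then parse the JSON. The sample string is illustrative of a typical Gemini response.

```js
const raw = '```json\n{ "Name": "Jung Hyun Park", "Email": "abc@dc.com", "Company": "TOV" }\n```';

function parseGeminiJson(text) {
  // Remove ```json ... ``` fences and stray whitespace before parsing.
  const cleaned = text.replace(/```json|```/g, "").trim();
  try {
    return JSON.parse(cleaned);
  } catch (err) {
    // Surface a readable error instead of failing silently downstream.
    throw new Error(`Gemini response was not valid JSON: ${err.message}`);
  }
}

console.log(parseGeminiJson(raw));
// → { Name: 'Jung Hyun Park', Email: 'abc@dc.com', Company: 'TOV' }
```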
by Dhinesh Ravikumar
## Who it's for

Project managers, AI builders, and teams who want structured, automated meeting summaries with zero manual work.

## What it does

This workflow monitors a Google Drive folder for new meeting notes (PDF/TXT), extracts the text, summarizes it via OpenAI GPT-4o, groups tasks by sentiment, builds a styled HTML summary, and sends it via Gmail.

## How to set it up

1. Connect Google Drive, OpenAI, and Gmail credentials.
2. Point the Drive Trigger to your meeting notes folder.
3. Paste the system prompt into the AI node.
4. Set the Gmail Email Type to HTML and the Message to {{$json.email_html}} (a sketch of building email_html follows below).
5. Drop a test file and execute once.

## Requirements

- n8n account
- Google Drive, OpenAI, and Gmail credentials
- Non-scanned PDFs or plain text files

## Customization ideas

- Add Slack or Notion logging
- Support additional file types
- Translate summaries automatically

Tags: #ai #automation #productivity #gmail #drive #meeting-summary #openai
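A hedged sketch of how a Code node could turn the model's output into the email_html field the Gmail node expects. The summary/tasks structure here is an assumption — adapt it to whatever your AI node actually returns.

```js
const summary = {
  overview: "Weekly sync: launch slipped one week; QA capacity is the bottleneck.",
  tasks: [
    { text: "Hire contract QA engineer", sentiment: "negative" },
    { text: "Demo went well with the pilot customer", sentiment: "positive" },
    { text: "Draft revised launch timeline", sentiment: "neutral" },
  ],
};

// Group tasks by sentiment, then render a simple styled HTML block.
const groups = {};
for (const task of summary.tasks) {
  (groups[task.sentiment] ??= []).push(task.text);
}

const sections = Object.entries(groups)
  .map(([sentiment, items]) =>
    `<h3 style="text-transform:capitalize">${sentiment}</h3><ul>${items.map(t => `<li>${t}</li>`).join("")}</ul>`)
  .join("");

const email_html = `<div style="font-family:Arial,sans-serif"><h2>Meeting Summary</h2><p>${summary.overview}</p>${sections}</div>`;

console.log(email_html); // in an n8n Code node, return [{ json: { email_html } }] instead
```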
by Nalin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Complete account-based outreach automation with Octave context engine

## Who is this for?

Revenue teams, account-based marketing professionals, and growth operators who want a complete, automated pipeline from account identification to contextualized outreach. Built for teams ready to move beyond fragmented point solutions to an integrated, context-aware GTM engine.

## What problem does this solve?

Most GTM teams are flying blind with disconnected tools that can't talk to each other. You qualify accounts in one system, find contacts in another, research context manually, then hope your email sequences land. Each step loses context, and by the time you're writing outreach, you've forgotten why the account was qualified in the first place. Octave centralizes all this typically fragmented context — your ICP definitions, personas, value propositions, and business logic — so every agent operation can act on the same unified understanding of your market. This workflow demonstrates how Octave's agents work together seamlessly because they all share the same context foundation.

## What this workflow does

**Complete Account-to-Outreach Pipeline:** This workflow demonstrates the full power of Octave's context engine by connecting multiple agent operations in a seamless flow. Unlike traditional tools that lose context at each handoff, Octave centralizes your business context — ICP definitions, personas, value propositions, competitive positioning — so every agent operates from the same unified understanding of your market.

**External Context Research:**
- Gathers real-time external data about target accounts (job postings, news, funding, etc.)
- Processes this information to create runtime context for later use in outreach
- Establishes the "why reach out now" foundation for the entire workflow

**Company-Level Qualification:**
- Uses Octave's company qualification to assess account fit against your specific offering
- Leverages Product- and Segment-level fit criteria defined in your Library
- Filters out accounts that don't meet your qualification thresholds
- Ensures only high-potential accounts proceed through the workflow

**Intelligent Contact Discovery:**
- Runs Octave's prospector agent on qualified accounts
- Finds relevant stakeholders based on responsibilities and business context, not just job titles
- Discovers multiple contacts per account for comprehensive coverage
- Maintains qualification context when identifying the right people

**Runtime Context Integration:**
- Takes the external context gathered at the beginning and injects it into sequence generation
- Creates truly dynamic, timely outreach that references current company events
- Generates sequences that feel impossibly relevant and well-researched

**Multi-Contact Sequence Generation:**
- Splits discovered contacts into individual records for processing
- Generates contextualized email sequences for each contact
- Maintains account-level context while creating contact-specific messaging
- Produces sequences (1-7 emails) that feel unmistakably meant for each person

**Automated Campaign Deployment:**
- Automatically adds all qualified contacts with their contextualized sequences to email campaigns
- Maps dynamic content to campaign variables for seamless execution
- Maintains the context chain from qualification through delivery

## Setup

**Required Credentials:**
- Octave API key and workspace access
- External data source API (job boards, news APIs, enrichment services, etc.)
- Email platform API key (Instantly.ai configured, easily adaptable)
- Optional: LLM credentials if using the example external research agent

**Step-by-Step Configuration:**

1. **Set up Account Input Source:**
   - Replace your-webhook-path-here with a unique, secure path
   - Configure your account source (CRM, website visitors, target lists) to send company data
   - Ensure account data includes the company name and domain for processing
2. **Configure External Context Research:**
   - Replace the example AI agent with your preferred external data source
   - Set up connections to job boards, news APIs, or enrichment services
   - Configure context gathering to find timely, relevant information about target accounts
3. **Set up Company Qualification Agent:**
   - Add your Octave API credentials
   - Replace your-octave-company-qualification-agent-id with your actual agent ID
   - Configure qualification criteria at Product and Segment levels in your Octave Library
4. **Configure Prospector Agent:**
   - Replace your-octave-prospector-agent-id with your actual prospector agent ID
   - Define target personas and stakeholder types in your Octave Library
   - Set contact discovery parameters for optimal coverage
5. **Set up Sequence Agent:**
   - Replace your-octave-sequence-agent-id with your actual sequence agent ID
   - Configure runtime context integration for dynamic content
   - Test sequence quality with the external context integration
6. **Configure Email Campaign Platform:**
   - Add your email platform API credentials
   - Replace your-campaign-id-here with your actual campaign ID
   - Ensure the campaign supports custom variables for dynamic content

**Required Webhook Payload Format** (a test request sketch follows at the end of this description):

```json
{
  "body": {
    "companyName": "InnovateTech Solutions",
    "companyDomain": "innovatetech.com"
  }
}
```

## How to customize

**External Context Sources:** Replace the example research with your data sources:
- **Job Board APIs:** Reference current hiring and team expansion
- **News APIs:** Mention funding, product launches, or market expansion
- **Enrichment Services:** Pull technology adoption, market changes, or competitive moves
- **Social Monitoring:** Reference recent company posts or industry discussions

**Company Qualification:** Configure qualification in your Octave company qualification agent:
- **Product Level:** Define "good fit" and "bad fit" questions for your core offering
- **Segment Level:** Set criteria for different market segments or use cases
- **Qualification Thresholds:** Adjust the filter score based on your standards

**Contact Discovery:** Customize prospecting in your Octave prospector agent:
- **Target Personas:** Define which Library personas to prioritize
- **Organizational Levels:** Focus on specific seniority levels or decision-making authority
- **Contact Volume:** Adjust how many contacts to discover per qualified account

**Runtime Context Integration:** Configure dynamic content injection:
- **Context Definition:** Specify what external data represents in your sequences
- **Usage Instructions:** Define how to incorporate context into messaging
- **Email-Level Control:** Apply different context to different emails in sequences

**Sequence Generation:** Customize email creation:
- **Core Context (Library):** Define personas, use cases, and offering definitions
- **Strategy (Playbooks):** Configure messaging frameworks and value propositions
- **Writing Style (Agent):** Adjust tone, voice, and communication approach

**Campaign Integration:** Adapt for different email platforms:
- Update API endpoints and authentication for your preferred platform
- Modify variable mapping for platform-specific requirements
- Adjust sequence formatting and length based on platform capabilities

## Use Cases

- Complete inbound lead processing, from website visitor to qualified outreach
- Event-triggered account processing for funding announcements or hiring spikes
- Competitive displacement campaigns with account qualification and contact discovery
- Market expansion automation for entering new territories or segments
- Product launch outreach with context-aware targeting and messaging
- Customer expansion workflows for upselling within existing account bases
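A hedged sketch of pushing a test account into the workflow. The URL is a placeholder — use your own webhook node's production URL with the unique path you configured. Note that n8n's Webhook node nests the POST body under $json.body, which is why the required payload format above shows a "body" wrapper.

```js
const ACCOUNT_WEBHOOK_URL = "https://your-n8n-instance.example.com/webhook/your-webhook-path-here";

async function sendTestAccount() {
  const response = await fetch(ACCOUNT_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      companyName: "InnovateTech Solutions",
      companyDomain: "innovatetech.com",
    }),
  });
  console.log("Workflow accepted the account:", response.status);
}

sendTestAccount();
```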
by Nalin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Generate dynamic email sequences with runtime context and external data

## Who is this for?

Growth teams, sales development reps, and outbound marketers who want to reference specific, real-time information about prospects in their email sequences. Built for teams that have access to external data sources and want to create truly contextualized outreach that feels impossibly relevant.

## What problem does this solve?

Most outbound sequences are static — they use the same messaging for everyone, regardless of what's actually happening at the prospect's company right now. You might know they're hiring, launched a product, got funding, or expanded to new markets, but your email sequences can't dynamically reference these timely events. This workflow shows how to inject real-time external context into Octave's sequence generation, creating outreach that feels like you're personally monitoring each prospect's company.

## What this workflow does

**Lead Data & Context Collection:**
- Receives lead information via webhook (firstName, companyName, companyDomain, profileURL, jobTitle)
- Uses external data sources to gather timely context about the prospect's company
- Example: an AI agent researches current job postings to find roles they're actively hiring for
- Processes this context into structured data for sequence generation

**Runtime Context Integration:**
- Feeds external context into Octave's sequence generation as "runtime context"
- Defines both WHAT the context is ("they are hiring a software engineer") and HOW to use it ("mention the role in the opening")
- Allows Octave to weave timely, relevant details into each email naturally
- Creates sequences that feel like personal research rather than mass outreach

**Dynamic Sequence Generation:**
- Leverages Octave's context engine plus runtime data to create hyper-relevant sequences (1-7 emails)
- Generates subject lines and email content that reference specific, current company context
- Maintains your positioning and value prop while incorporating timely relevance
- Creates messaging that feels unmistakably meant for that specific moment in the prospect's business

**Campaign Integration:**
- Automatically adds leads with contextualized sequences to your email platform
- Maps generated content to campaign variables for automated sending
- Supports multiple email platforms with easy customization

## Setup

**Required Credentials:**
- Octave API key and workspace access
- External data source API (job boards, news APIs, enrichment services, etc.)
- Email platform API key (Instantly.ai configured, easily adaptable)
- Optional: LLM credentials if using the example AI agent for testing

**Step-by-Step Configuration:**

1. **Set up External Data Source:**
   - Replace the AI Agent with your preferred data source (job board APIs, news APIs, company databases)
   - Configure data collection to find relevant, timely information about prospects
   - Structure the output to provide clean context for sequence generation
2. **Set up Octave Sequence Agent:**
   - Add your Octave API credentials in n8n
   - Replace your-octave-sequence-agent-id with your actual sequence agent ID
   - Configure runtime context parameters:
     - Runtime Context: define WHAT the external data represents
     - Runtime Instructions: specify HOW to use it in the messaging
3. **Configure Email Platform:**
   - Add your email platform API credentials
   - Replace your-campaign-id-here with your actual campaign ID
   - Ensure the campaign supports custom variables for dynamic content
4. **Set up Lead Source:**
   - Replace your-webhook-path-here with a unique, secure path
   - Configure your lead source to send prospect data to the webhook
   - Test the end-to-end flow with sample leads

**Required Webhook Payload Format:**

```json
{
  "body": {
    "firstName": "Alex",
    "lastName": "Chen",
    "companyName": "InnovateTech",
    "companyDomain": "innovatetech.com",
    "profileURL": "https://linkedin.com/in/alexchen",
    "email": "alex@innovatetech.com",
    "jobTitle": "VP of Engineering"
  }
}
```

## How to customize

**External Data Sources:** Replace the AI agent with your preferred context collection method:
- **Job Board APIs:** Reference current hiring needs and team expansion
- **News APIs:** Mention recent company announcements, funding, or product launches
- **Social Media Monitoring:** Reference recent LinkedIn posts, company updates, or industry discussions
- **Enrichment Services:** Pull real-time company data, technology stack changes, or market expansion

**Runtime Context Configuration:** Customize how external data integrates with sequences (see the sketch at the end of this description):
- **Context Definition:** Specify what the external data represents ("they just raised Series B funding")
- **Usage Instructions:** Define how to incorporate it ("mention the funding in the opening and tie it to growth challenges")
- **Email-Level Control:** Configure different context usage for different emails in the sequence
- **Global vs. Specific:** Apply context to all emails or target specific messages

**Data Processing:** Replace the example AI agent with your external data processing:
- Modify data source connections to pull relevant context
- Ensure consistent output formatting for runtime context integration
- Add error handling for cases where external data isn't available
- Implement fallback context for prospects without relevant external data

**Sequence Customization:** Configure Octave sequence generation:
- **Core Context (Library):** Define your personas, use cases, and offering definitions
- **Strategy (Playbooks):** Configure messaging frameworks and value proposition delivery
- **Writing Style (Agent):** Adjust tone, voice, and communication style

**Email Platform Integration:** Adapt for different email sequencing platforms:
- Update API endpoints and authentication for your preferred platform
- Modify variable mapping for platform-specific custom fields
- Adjust sequence length and formatting requirements

## Use Cases

- Job posting-triggered outreach for hiring managers and HR teams
- Funding announcement follow-ups for growth-stage companies
- Product launch congratulations with relevant use case discussions
- Market expansion outreach when companies enter new territories
- Technology adoption sequences based on recent stack additions
- Event attendance follow-ups with session-specific references
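An illustrative sketch of shaping external research into the two runtime inputs described above — WHAT the context is and HOW to use it. The variable names are assumptions for the example, not Octave's actual API parameter names.

```js
// Example external data: a job posting found for the prospect's company.
const jobPosting = { title: "Senior Software Engineer", postedDaysAgo: 4 };

// WHAT the external data represents.
const runtimeContext = `They are currently hiring a ${jobPosting.title} (posted ${jobPosting.postedDaysAgo} days ago).`;

// HOW the sequence should use it.
const runtimeInstructions =
  "Mention the open role in the opening line and tie it to the pain of scaling the engineering team.";

// Pass both values into the Octave sequence agent node as its runtime context
// parameters; positioning and value prop stay in your Library and Playbooks.
console.log({ runtimeContext, runtimeInstructions });
```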
by Growth AI
# Google Ads automated reporting to spreadsheets with Airtable

## Who's it for

Digital marketing agencies, PPC managers, and marketing teams who manage multiple Google Ads accounts and need automated monthly performance reporting organized by campaign type and conversion metrics.

## What it does

This workflow automatically retrieves Google Ads performance data from multiple client accounts and populates organized spreadsheets with campaign metrics. It differentiates between e-commerce (conversion value) and lead generation (conversion count) campaigns, then organizes data by advertising channel (Performance Max, Search, Display, etc.) with monthly tracking for budget and performance analysis.

## How it works

The workflow follows an automated data collection and reporting process:

1. **Account Retrieval:** Fetches client information from Airtable (project names, Google Ads IDs, campaign types)
2. **Active Filter:** Processes only accounts marked as "Actif" for budget reporting
3. **Campaign Classification:** Routes accounts through e-commerce or lead generation workflows based on "Typologie ADS"
4. **Google Ads Queries:** Executes different API calls depending on campaign type (conversion value vs. conversion count)
5. **Data Processing:** Organizes metrics by advertising channel (Performance Max, Search, Display, Video, Shopping, Demand Gen)
6. **Dynamic Spreadsheet Updates:** Automatically fills the correct monthly column in client spreadsheets
7. **Sequential Processing:** Handles multiple accounts with wait periods to avoid API rate limits

## Requirements

- Airtable account with a client database
- Google Ads API access with a developer token
- Google Sheets API access
- Client-specific spreadsheet templates (provided)

## How to set up

### Step 1: Prepare your reporting template

- Copy the Google Sheets reporting template
- Create individual copies for each client
- Ensure the proper column structure (months B-M for January-December)
- Link template URLs in your Airtable database

### Step 2: Configure your Airtable database

Set up the following fields in your Airtable:

- Project names: Client project identifiers
- ID GADS: Google Ads customer IDs
- Typologie ADS: Campaign classification ("Ecommerce" or "Lead")
- Status - Prévisionnel budgétaire: Account status ("Actif" for active accounts)
- Automation budget: URLs to client-specific reporting spreadsheets

### Step 3: Set up API credentials

Configure the following authentication:

- Airtable Personal Access Token: For client database access
- Google Ads OAuth2: For advertising data retrieval
- Google Sheets OAuth2: For spreadsheet updates
- Developer Token: Required for Google Ads API access
- Login Customer ID: Manager account identifier

### Step 4: Configure Google Ads API settings

Update the HTTP request nodes with your credentials:

- Developer Token: Replace "[Your token]" with your actual developer token
- Login Customer ID: Replace "[Your customer id]" with your manager account ID
- API Version: Currently using v18 (update as needed)

### Step 5: Set up scheduling

- Default schedule: Runs on the 3rd of each month at 5 AM
- Cron expression: 0 5 3 * *
- Recommended timing: Early-month execution for complete previous-month data
- Processing delay: 1-minute waits between accounts to respect API limits

## How to customize the workflow

### Campaign type customization

**E-commerce campaigns:**
- Tracks: Cost and conversion value metrics
- Query: metrics.conversions_value for revenue tracking
- Use case: Online stores, retail businesses

**Lead generation campaigns:**
- Tracks: Cost and conversion count metrics
- Query: metrics.conversions for lead quantity
- Use case: Service businesses, B2B companies

### Advertising channel expansion

Current channels tracked:

- Performance Max: Automated campaign type
- Search: Text ads on search results
- Display: Visual ads on partner sites
- Video: YouTube and video partner ads
- Shopping: Product listing ads
- Demand Gen: Audience-focused campaigns

Add new channels by modifying the data processing code nodes.

### Reporting period adjustment

- Current setting: Last month's data (DURING LAST_MONTH)
- Alternative periods: Last 30 days, specific date ranges, quarterly reports
- Custom timeframes: Modify the Google Ads query date parameters

### Multi-account management

- Sequential processing: Handles multiple accounts automatically
- Error handling: Continues processing if individual accounts fail
- Rate limiting: Built-in waits prevent API quota issues
- Batch size: No limit on the number of accounts processed

## Data organization features

### Dynamic monthly columns

- Automatic detection: Determines the previous month's column (B-M)
- Column mapping: January=B, February=C, ..., December=M
- Data placement: Updates the correct month automatically
- Multi-year support: Handles year transitions seamlessly
- See the sketch at the end of this description for the mapping and cost-conversion logic.

### Campaign performance breakdown

Each account populates 10 rows of data:

- Performance Max Cost (Row 2)
- Performance Max Conversions/Value (Row 3)
- Demand Gen Cost (Row 4)
- Demand Gen Conversions/Value (Row 5)
- Search Cost (Row 6)
- Search Conversions/Value (Row 7)
- Video Cost (Row 8)
- Video Conversions/Value (Row 9)
- Shopping Cost (Row 10)
- Shopping Conversions/Value (Row 11)

### Data processing logic

- Cost conversion: Automatically converts micros to euros (÷1,000,000)
- Precision rounding: Rounds to 2 decimal places for clean presentation
- Zero handling: Shows 0 for campaign types with no activity
- Data validation: Handles missing or null values gracefully

## Results interpretation

### Monthly performance tracking

- Historical data: Year-over-year comparison across all channels
- Channel performance: Identify the best-performing advertising types
- Budget allocation: Data-driven decisions for campaign investments
- Trend analysis: Month-over-month growth or decline patterns

### Account-level insights

- Multi-client view: Consolidated reporting across all managed accounts
- Campaign diversity: Understanding which channels clients use most
- Performance benchmarks: Compare similar account types and industries
- Resource allocation: Focus on high-performing accounts and channels

## Use cases

### Agency reporting automation

- Client dashboards: Automated population of monthly performance reports
- Budget planning: Historical data for next month's budget recommendations
- Performance reviews: Ready-to-present data for client meetings
- Trend identification: Spot patterns across multiple client accounts

### Internal performance tracking

- Team productivity: Track account management efficiency
- Campaign optimization: Identify underperforming channels for improvement
- Growth analysis: Monitor client account growth and expansion
- Forecasting: Use historical data for future performance predictions

### Strategic planning

- Budget allocation: Data-driven distribution across advertising channels
- Channel strategy: Determine which campaign types to emphasize
- Client retention: Proactive identification of declining accounts
- New business: Performance data to support proposals and pitches

## Workflow limitations

- Monthly execution: Designed for monthly reporting (not real-time)
- API dependencies: Requires stable Google Ads and Sheets API access
- Rate limiting: Sequential processing prevents parallel account handling
- Template dependency: Requires a specific spreadsheet structure for proper data placement
- Previous month focus: Optimized for completed-month data (run early in the new month)
- Manual credential setup: Requires individual configuration of API tokens and customer IDs
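A minimal sketch of the two data-processing rules described above: mapping the previous month to its spreadsheet column (January=B ... December=M) and converting Google Ads cost micros into euros rounded to 2 decimals. The exact code nodes in the workflow may differ.

```js
function previousMonthColumn(today = new Date()) {
  // 0 = January ... 11 = December; step back one month, wrapping across years.
  const prevMonthIndex = (today.getMonth() + 11) % 12;
  return String.fromCharCode("B".charCodeAt(0) + prevMonthIndex); // columns B..M
}

function microsToEuros(costMicros) {
  const euros = (costMicros || 0) / 1_000_000; // micros → euros
  return Math.round(euros * 100) / 100;        // round to 2 decimals
}

console.log(previousMonthColumn(new Date("2025-01-15"))); // → "M" (December of the previous year)
console.log(microsToEuros(1234567890));                   // → 1234.57
```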
by Nansen
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

This workflow listens for an incoming chat message and routes it to an AI Agent. The agent is powered by your preferred Chat Model (such as OpenAI or Anthropic) and extended with the Nansen MCP tool, which enables it to retrieve onchain wallet data, token movements, and address-level insights in real time.

The Nansen MCP tool uses the HTTP Streamable transport and requires API key authentication via Header Auth. Read the documentation: https://docs.nansen.ai/nansen-mcp/overview

## Set up steps

1. **Get your Nansen MCP API key**
   - Visit: https://app.nansen.ai/account?tab=api
   - Generate and copy your personal API key.
2. **Create a credential for authentication**
   - From the homepage, click the dropdown next to "Create Workflow" → "Create Credential".
   - Select Header Auth as the method.
   - Set the Header Name to: NANSEN-API-KEY
   - Paste your API key into the Value field.
   - Save the credential (e.g., Nansen MCP Credentials).
3. **Configure the Nansen MCP tool**
   - Endpoint: https://mcp.nansen.ai/ra/mcp/
   - Server Transport: HTTP Streamable
   - Authentication: Header Auth
   - Credential: Select Nansen MCP Credentials
   - Tools to Include: Leave as All (or restrict as needed)
4. **Configure the AI Agent**
   - Connect your preferred Chat Model (e.g., OpenAI, Anthropic) to the Chat Model input.
   - Connect the Nansen MCP tool to the Tool input.
   - (Optional) Add a Memory block to preserve conversational context.
5. **Set up the chat trigger**
   - Use the "When chat message received" node to start the flow when a message is received.
6. **Test your setup**
   - Try sending prompts like:
     - What tokens are being swapped by 0xabc...123?
     - Get recent wallet activity for this address.
     - Show top holders of token XYZ.