by Browser Use
A sample demo showing how to integrate the Browser Use Cloud API with n8n workflows. This template demonstrates AI-powered web research automation by collecting competitor intelligence and delivering formatted results to Slack.

## How It Works
- Form trigger accepts competitor name input
- Browser Use Cloud API performs automated web research
- Webhook processes completion status and retrieves structured data
- JavaScript code formats results into a readable Slack message (see the sketch below)
- HTTP request sends the final report to Slack

## Integration Pattern
This workflow showcases key cloud API integration techniques:
- REST API authentication with bearer tokens
- Webhook-based status monitoring for long-running tasks
- JSON data parsing and transformation
- Conditional logic for processing different response states

## Setup Required
- Browser Use API key (sign up at cloud.browser-use.com)
- Slack webhook URL

A perfect demo for learning Browser Use Cloud API integrations and building automated research workflows.
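As a rough illustration of the formatting step, a Code node might turn the retrieved research data into a Slack payload like this. This is a minimal sketch: the field names on the task output (`company`, `summary`, `findings`) are assumptions for illustration, not the actual Browser Use schema.

```javascript
// n8n Code node - a minimal sketch of the Slack formatting step, assuming
// the Browser Use task output was parsed into fields like these (names are
// illustrative, not the actual API schema):
const research = $json.output ?? {};

const lines = [
  `*Competitor Report: ${research.company ?? 'Unknown'}*`,
  '',
  `*Summary:* ${research.summary ?? 'No summary returned.'}`,
  '',
  '*Key Findings:*',
  ...(research.findings ?? []).map((f, i) => `${i + 1}. ${f}`),
];

// Slack incoming webhooks accept a simple { text } payload with mrkdwn.
return [{ json: { text: lines.join('\n') } }];
```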
by Dart
Automatically generate a meeting summary from your meetings through Fathom, save it to a Dart document, and create a review task with the Fathom link attached.

## What it does
This workflow activates when a Fathom meeting ends (via a webhook). It uses an AI model to generate a structured summary of the meeting. The workflow then saves the summary to a Dart document and creates a review task with the Fathom link attached.

## Who's it for
- Teams or individuals needing automatic meeting notes.
- Project managers tracking reviews and actions from meetings.
- Users of Fathom and Dart who want to streamline their documentation and follow-up process.

## How to set up
1. Import the workflow into n8n.
2. Connect your Dart account (it will need workspace and folder access).
3. Add your PROD webhook link from the webhook node to your Fathom API settings.
4. Replace the dummy Folder ID and Dartboard ID with your actual target IDs.
5. Choose your preferred AI model for generating the summaries.

## Requirements
- n8n account
- Connected Dart account
- Connected Fathom account (with access to API webhooks)

## How to customize the workflow
- Edit the AI prompt to adjust the tone, style, or format of the meeting summaries.
- Add, remove, or change the summary sections to match your needs (e.g., Key Takeaways, Action Items, Next Steps).
by vinci-king-01
# Smart Blockchain Monitor with ScrapeGraphAI Risk Detection and Instant Alerts

## 🎯 Target Audience
- Cryptocurrency traders and investors
- DeFi protocol managers and developers
- Blockchain security analysts
- Financial compliance officers
- Crypto fund managers and institutions
- Risk management teams
- Blockchain developers monitoring smart contracts
- Digital asset custodians

## 🚀 Problem Statement
Manual blockchain monitoring is time-consuming and prone to missing critical events, often leading to delayed responses to high-value transactions, security threats, or unusual network activity. This template solves the challenge of real-time blockchain surveillance by automatically detecting, analyzing, and alerting on significant blockchain events using AI-powered intelligence and instant notifications.

## 🔧 How it Works
This workflow automatically monitors blockchain activity in real time, uses ScrapeGraphAI to intelligently extract transaction data from explorer pages, performs sophisticated risk analysis, and instantly alerts your team about significant events across multiple blockchains.

### Key Components
1. **Blockchain Webhook** - Real-time trigger that activates when new blocks are detected
2. **Data Normalizer** - Standardizes blockchain data across different networks
3. **ScrapeGraphAI Extractor** - AI-powered transaction data extraction from blockchain explorers
4. **Risk Analyzer** - Advanced risk scoring based on transaction patterns and values
5. **Smart Filter** - Intelligently routes only significant events for alerts
6. **Slack Alert System** - Instant formatted notifications to your team

## 📊 Risk Analysis Specifications
The template performs comprehensive risk analysis with the following parameters:

| Risk Factor | Threshold | Score Impact | Description |
|-------------|-----------|--------------|-------------|
| High-Value Transactions | >$10,000 USD | +15 per transaction | Individual transactions exceeding threshold |
| Block Volume | >$1M USD | +20 points | Total block transaction volume |
| Block Volume | >$100K USD | +10 points | Moderate block transaction volume |
| Failure Rate | >10% | +15 points | Percentage of failed transactions in block |
| Multiple High-Value | >3 transactions | Alert trigger | Multiple large transactions in single block |
| Critical Failure Rate | >20% | Alert trigger | Extremely high failure rate indicator |

**Risk Levels:**
- **High Risk**: Score ≥ 50 (Immediate alerts)
- **Medium Risk**: Score ≥ 25 (Standard alerts)
- **Low Risk**: Score < 25 (No alerts)
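To make the scoring concrete, a Code-node sketch of the Risk Analyzer logic described in the table above could look like this. The input field names (`transactions`, `valueUsd`, `failed`) are assumptions for illustration, not the template's actual schema; the thresholds and weights come from the table.

```javascript
// n8n Code node - a minimal sketch of the risk scoring above.
// Assumes each incoming item carries a parsed block like:
// { transactions: [{ valueUsd: number, failed: boolean }, ...] }
const txs = $json.transactions ?? [];

const highValue = txs.filter((t) => t.valueUsd > 10_000);
const blockVolume = txs.reduce((sum, t) => sum + t.valueUsd, 0);
const failureRate = txs.length
  ? txs.filter((t) => t.failed).length / txs.length
  : 0;

let score = highValue.length * 15;           // +15 per high-value transaction
if (blockVolume > 1_000_000) score += 20;    // large total block volume
else if (blockVolume > 100_000) score += 10; // moderate total block volume
if (failureRate > 0.1) score += 15;          // elevated failure rate

const riskLevel = score >= 50 ? 'high' : score >= 25 ? 'medium' : 'low';

// Standalone alert triggers, independent of the score
const forceAlert = highValue.length > 3 || failureRate > 0.2;

return [{ json: { score, riskLevel, alert: riskLevel !== 'low' || forceAlert } }];
```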
## 🌐 Supported Blockchains

| Blockchain | Explorer | Native Support | Transaction Detection |
|------------|----------|----------------|----------------------|
| Ethereum | Etherscan | ✅ Full | High-value, DeFi, NFT |
| Bitcoin | Blockchair | ✅ Full | Large transfers, institutional |
| Binance Smart Chain | BscScan | ✅ Full | DeFi, high-frequency trading |
| Polygon | PolygonScan | ✅ Full | Layer 2 activity monitoring |

## 🛠️ Setup Instructions
**Estimated setup time: 15-20 minutes**

### Prerequisites
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Slack workspace with webhook or bot token
- Blockchain data source (Moralis, Alchemy, or direct node access)
- Basic understanding of blockchain explorers

### Step-by-Step Configuration

**1. Install Community Nodes**
- Install the required community node: `npm install n8n-nodes-scrapegraphai`

**2. Configure ScrapeGraphAI Credentials**
- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure proper functionality

**3. Set up Slack Integration**
- Add Slack OAuth2 or webhook credentials
- Configure your target channel for blockchain alerts
- Test message delivery to ensure notifications work
- Customize alert formatting preferences

**4. Configure Blockchain Webhook**
- Set up the webhook endpoint for blockchain data
- Configure your blockchain data provider (Moralis, Alchemy, etc.)
- Ensure the webhook payload includes the block number and a blockchain identifier
- Test webhook connectivity with sample data

**5. Customize Risk Parameters**
- Adjust the high-value transaction threshold (default: $10,000)
- Modify risk scoring weights based on your needs
- Configure blockchain-specific risk factors
- Set failure rate thresholds for your use case

**6. Test and Validate**
- Send test blockchain data to trigger the workflow
- Verify ScrapeGraphAI extraction accuracy
- Check risk scoring calculations
- Confirm Slack alerts are properly formatted and delivered

## 🔄 Workflow Customization Options

### Modify Risk Analysis
- Adjust high-value transaction thresholds per blockchain
- Add custom risk factors (contract interactions, specific addresses)
- Implement whitelist/blacklist address filtering
- Configure time-based risk adjustments

### Extend Blockchain Support
- Add support for additional blockchains (Solana, Cardano, etc.)
- Customize explorer URL patterns
- Implement chain-specific transaction analysis
- Add specialized DeFi protocol monitoring

### Enhance Alert System
- Add email notifications alongside Slack
- Implement severity-based alert routing
- Create custom alert templates
- Add alert escalation rules

### Advanced Analytics
- Add transaction pattern recognition
- Implement anomaly detection algorithms
- Create blockchain activity dashboards
- Add historical trend analysis

## 📈 Use Cases
- **Crypto Trading**: Monitor large market movements and whale activity
- **DeFi Security**: Track protocol interactions and unusual contract activity
- **Compliance Monitoring**: Detect suspicious transaction patterns
- **Institutional Custody**: Alert on high-value transfers and security events
- **Smart Contract Monitoring**: Track contract interactions and state changes
- **Market Intelligence**: Analyze blockchain activity for trading insights

## 🚨 Important Notes
- Respect ScrapeGraphAI API rate limits and terms of service
- Implement appropriate delays to avoid overwhelming blockchain explorers
- Keep your API credentials secure and rotate them regularly
- Monitor API usage to manage costs effectively
- Consider blockchain explorer rate limits for high-frequency monitoring
- Ensure compliance with relevant financial regulations
- Regularly update risk parameters based on market conditions

## 🔧 Troubleshooting

**Common Issues:**
- ScrapeGraphAI extraction errors: Check API key and account status
- Webhook trigger failures: Verify webhook URL and payload format
- Slack notification failures: Check bot permissions and channel access
- False positive alerts: Adjust risk scoring thresholds
- Missing transaction data: Verify blockchain explorer accessibility
- Rate limit errors: Implement delays and monitor API usage

**Support Resources:**
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Blockchain explorer API documentation
- Slack API documentation for advanced configurations
- Cryptocurrency compliance and regulatory guidelines
by 21CEL
## How it works
This workflow runs a spider job in the background via Scrapyd, using a YAML config that defines selectors and parsing rules. When triggered, it schedules the spider with parameters (query, project ID, page limits, etc.). The workflow polls Scrapyd until the job finishes (a sketch of this schedule-and-poll pattern appears below). Once complete, it fetches the output items, enriches them (parse JSONL, deduplicate, extract ID/part number/make/model/part name, normalize price), sorts results, and returns structured JSON. Optional debug outputs such as logs, HTML dumps, and screenshots are also collected.

## How to use
- Use the manual trigger for testing, or replace it with webhook, schedule, or other triggers.
- Adjust the runtime parameters (q, project_id, pages, etc.) directly in the workflow when running.
- The background spider config (YAML and spider code) must be updated separately — this workflow only orchestrates and enriches results; it does not define scraping logic.

## Requirements
- Scrapyd service for job scheduling & status tracking
- A deployed spider with a valid YAML config (adjust selectors there)
- JSON Lines output (items.jl) from the spider
- Endpoints for optional artifacts (logs, HTML, screenshots)
- n8n with HTTP, Wait, Code, and Aggregate nodes enabled

## Customising this workflow
- Update the YAML config if the target website structure changes
- Modify the enrichment code to extract different fields (e.g., categories, ratings)
- Adjust deduplication (cheapest, newest, or other logic)
- Toggle debug retrieval depending on performance/storage needs
- Extend the webhook response to integrate with databases, APIs, or downstream workflows
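For reference, the schedule-and-poll pattern this workflow implements with HTTP and Wait nodes looks roughly like this as plain code. Scrapyd's `schedule.json` and `listjobs.json` endpoints are standard; the base URL, project name, spider name, and the items path (which requires `items_dir` to be configured in Scrapyd) are placeholders.

```javascript
// A minimal sketch of the schedule-and-poll pattern against Scrapyd's HTTP
// API. Base URL, project, and spider names are placeholders; the items URL
// assumes Scrapyd is configured with items_dir so it serves .jl files.
const SCRAPYD = 'http://localhost:6800';

async function runSpider(query) {
  // schedule.json starts a job and returns its jobid
  const body = new URLSearchParams({ project: 'parts', spider: 'catalog', q: query });
  const { jobid } = await (
    await fetch(`${SCRAPYD}/schedule.json`, { method: 'POST', body })
  ).json();

  // Poll listjobs.json until the job appears under "finished"
  for (;;) {
    const jobs = await (await fetch(`${SCRAPYD}/listjobs.json?project=parts`)).json();
    if (jobs.finished.some((j) => j.id === jobid)) break;
    await new Promise((r) => setTimeout(r, 5000)); // like the workflow's Wait node
  }

  // Items come back as JSON Lines: one JSON object per line
  const jl = await (await fetch(`${SCRAPYD}/items/parts/catalog/${jobid}.jl`)).text();
  return jl.trim().split('\n').map((line) => JSON.parse(line));
}
```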
by Adrian
## 📘 Overview
This workflow automates end-to-end social media publishing powered by the Late API. It generates text content with Google Gemini, creates branded visuals with Kie.ai, uploads media to Late, and publishes across multiple platforms (Facebook, Instagram, LinkedIn, TikTok). It's a production-ready automation for marketing teams who want to save hours of work by letting AI handle both copywriting and design — all inside n8n.

## ⚙️ How it works
1. **Generate text content** → Google Gemini produces platform-optimized copy (tone & length adapted to each network).
2. **Generate visuals** → Kie.ai Seedream v4 creates branded 1080x1080 images.
3. **Upload to Late** → media is stored using Late's upload API (small & large file handling).
4. **Publish** → posts are created via the Late API on enabled platforms with the correct `{ platform, accountId }` mapping (see the sketch below).
5. **Notify** → success logs are sent via Slack, Discord, Email, and Webhook.

## 🛠 Setup Steps
**Time to set up:** ~10–15 minutes

1. Add your API keys in n8n Credentials:
   - Google Gemini API (PaLM)
   - Kie.ai (Seedream)
   - Late API
2. Insert your Account IDs (Facebook, Instagram, LinkedIn, TikTok) into the Default Settings node.
3. Choose which platforms to enable (ENABLE_FACEBOOK, ENABLE_INSTAGRAM, etc.).
4. Set your Business Type and Content Topic (e.g., "a tech company" / "new product launch").
5. Execute the workflow.

## 📝 Notes
- **Sticky Notes** are included in the workflow to guide each section: Overview, Prerequisites, Default Settings, Content Generation, Image Generation, Media Upload, Publishing Logic, Notifications, Error Handling.
- All API keys are handled via Credentials (no hardcoding).
- Fallback content is included in case Gemini fails to parse.
- Large image files (>4MB) are handled with Late's multipart upload flow.

## 💸 Cost per Flow (Estimated)
- **Late API**: $0.00 within Free/Unlimited plans, or ≈ $0.11/post on the Build plan ($13/120 posts).
- **Google Gemini**: ~$0.0001–$0.0004 per post (≈200 tokens in/out).
- **Kie.ai (Seedream)**: ≈ $0.01–$0.02 per generated image.

➡️ Total: ~$0.01–$0.12 per post, depending mainly on your Late plan & Kie.ai credits.

## 🎯 Use cases
- Marketing teams automating cross-platform campaigns.
- Solo founders posting content daily without design/copy effort.
- Agencies scaling social media management with AI + automation.

## 📢 Credits
Built by Adrian (RoboMarketing) for the n8n Arena Challenge – September 2025.
Powered by: Gemini API, Kie.ai Seedream, Late API.
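To make step 4 of "How it works" concrete, here is a Code-node sketch of assembling the `{ platform, accountId }` list from the Default Settings flags. The `ENABLE_*` flags come from the description above; the `*_ACCOUNT_ID` field names and the resulting payload shape are assumptions, not Late's documented schema.

```javascript
// n8n Code node - a minimal sketch of building the { platform, accountId }
// list from the Default Settings flags. Field names with *_ACCOUNT_ID are
// assumed; check Late's docs for the real publish payload schema.
const s = $json; // output of the Default Settings node

const mapping = [
  { platform: 'facebook',  enabled: s.ENABLE_FACEBOOK,  accountId: s.FACEBOOK_ACCOUNT_ID },
  { platform: 'instagram', enabled: s.ENABLE_INSTAGRAM, accountId: s.INSTAGRAM_ACCOUNT_ID },
  { platform: 'linkedin',  enabled: s.ENABLE_LINKEDIN,  accountId: s.LINKEDIN_ACCOUNT_ID },
  { platform: 'tiktok',    enabled: s.ENABLE_TIKTOK,    accountId: s.TIKTOK_ACCOUNT_ID },
];

// Keep only platforms that are enabled and have an account ID configured.
const platforms = mapping
  .filter((m) => m.enabled && m.accountId)
  .map(({ platform, accountId }) => ({ platform, accountId }));

return [{ json: { platforms } }];
```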
by Ruthwik
# n8n Google Sheets Monthly Order Logger
This n8n template records incoming e-commerce orders into Google Sheets, auto-creates a monthly sub-sheet, and adds a "Status" dropdown so your team can track fulfillment at a glance.

## Use cases
Centralize order logs, coordinate shipping across months, trigger customer updates (e.g., WhatsApp/Email) from status changes, and build lightweight ops dashboards.

## Good to know
- The Google Sheet ID is the part of the URL between /d/ and the next slash: https://docs.google.com/spreadsheets/d/<sheetId>/.
- A new sub-sheet is created every month (sheet name = current month, e.g., "September 2025"). If it already exists, the workflow appends to it.
- The Status column uses data validation with these options: Not Shipped, Pickup Scheduled, Shipped, InTransit, Delivered, Cancelled.
- Make sure the Google credential in n8n has edit access to the spreadsheet.
- The Webhook URL must be added in your Shopify Settings → Notifications → Webhooks page with the required Order events (e.g., Order creation, Order update, Order fulfillment). Reference: Shopify Webhooks Guide.

## How it works
1. **Order created (Webhook/Trigger)**: Receives a new order payload from your store/stack.
2. **Config (set spreadsheetId)**: Stores the target Google Sheets spreadsheetId (copied from the URL).
3. **Get Order Sheets metadata**: Lists existing tabs to see if the tab for the current month already exists.
4. **Generate Sheet Name**: Computes the sheet name like {{ $now.format('MMMM YYYY') }} (see the sketch below).
5. **If (sheet exists?)**:
   - True →
     - **Google Sheets Row values (existing)**: Prepares the row for append using the month tab.
     - **Append to Existing Orders Sheet**: Appends the order as a new row.
   - False →
     - **Set Sheet Starting row/col**: Sets the starting cell (e.g., A1) for a brand-new month tab.
     - **Create Month Sheet**: Creates a new tab named for the current month.
     - **Write Headers (A1:…)**: Writes the column headers.
     - **Google Sheets Row values**: Maps payload fields into the header order and applies validation to Status.
     - **Append to Orders Sheet**: Appends the first row into the newly created month tab.

## How to use
1. In Config, paste your spreadsheetId from the sheet URL and confirm your Google credential has edit access.
2. (Optional) Adjust the month-tab naming format to match your preference.
3. In Shopify → Settings → Notifications → Webhooks, add your n8n webhook URL and select the Order events (Order creation, Order update, Order fulfillment, etc.) you want to capture.
4. Deploy the workflow and send a sample order to the trigger; a new month tab will be created automatically on the first order of each month.

## Requirements
- n8n instance with the Google Sheets node credential configured.
- A Google Spreadsheet you own or can edit.
- A Shopify store with webhook events enabled (see Shopify Webhooks Guide).

## Customising this workflow
- Add/remove columns (e.g., taxes, discounts, warehouse notes).
- Change the Status list or add conditional formatting (e.g., green = Delivered).
- Chain automations: on Status update → send tracking links, COD confirmation, or delivery feedback forms.
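A minimal Code-node sketch of steps 3 and 4 above: computing the month tab name and checking it against the spreadsheet's existing tabs. The `sheets[].properties.title` shape matches the Google Sheets API metadata response; how the metadata reaches this node is an assumption about your wiring.

```javascript
// n8n Code node - a minimal sketch, assuming the previous node returned the
// spreadsheet metadata from the Google Sheets API (spreadsheets.get), whose
// response lists tabs under sheets[].properties.title.
const meta = $json;

// Same result as the workflow's {{ $now.format('MMMM YYYY') }} expression,
// e.g. "September 2025".
const sheetName = new Intl.DateTimeFormat('en-US', {
  month: 'long',
  year: 'numeric',
}).format(new Date());

const existingTitles = (meta.sheets ?? []).map((s) => s.properties.title);
const sheetExists = existingTitles.includes(sheetName);

// A downstream If node can branch on sheetExists.
return [{ json: { sheetName, sheetExists } }];
```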
by Hassan
# AI-Powered Personalized Cold Email Icebreaker Generator

## Overview
This intelligent automation system transforms generic cold outreach into highly personalized email campaigns by automatically scraping prospect websites, analyzing their content with AI, and generating unique, conversational icebreakers that reference specific, non-obvious details about each business. The workflow integrates seamlessly with Instantly.ai to deliver campaigns that achieve significantly higher response rates compared to traditional cold email approaches.

The system processes leads from your n8n data table, validates contact information, scrapes multiple pages from each prospect's website, uses GPT-4.1 to synthesize insights, and crafts personalized openers that make recipients believe you've done deep research on their business — all without manual intervention.

## Key Benefits
- 🎯 **Hyper-Personalization at Scale**: Generate unique icebreakers for 30+ leads per execution that reference specific details about each prospect's business, creating the impression of manual research while automating 100% of the process.
- 💰 **Dramatically Higher Response Rates**: Personalized cold emails using this system typically achieve 4-5% response rates, directly translating to more booked meetings and closed deals.
- ⏱️ **Massive Time Savings**: What would take 10-15 minutes of manual research per prospect (website review, note-taking, icebreaker writing) now happens in 30-45 seconds automatically, freeing your team to focus on conversations instead of research.
- 🧠 **AI-Powered Intelligence**: A dual GPT model approach uses GPT-4.1-mini for efficient content summarization and GPT-4.1 for creative icebreaker generation, ensuring both cost efficiency and high-quality output with a distinctive "spartan" tone that converts.
- 🔄 **Built-In Error Handling**: Comprehensive retry logic (5 attempts with 5-second delays) and graceful failure management ensure the workflow continues processing even when websites are down or inaccessible, automatically removing problem records from your queue.
- 🗃️ **Clean Data Management**: Automatically removes processed leads from your database after successful campaign addition, preventing duplicate outreach and maintaining organized lead lists for future campaigns.
- 📊 **Batch Processing Control**: Processes leads in configurable batches (default 30) to manage API costs and rate limits while maintaining efficiency, with easy scaling for larger lists.
- 🔌 **Instantly.ai Integration**: Direct API integration pushes leads with custom variables into your campaigns automatically, supporting skip_if_in_campaign logic to prevent duplicate additions and maintain clean campaign lists.

## How It Works

### Stage 1: Lead Acquisition & Validation
The workflow begins with a manual trigger, allowing you to control when processing starts. It queries your n8n data table and retrieves up to 30 records filtered by Email_Status. The Limit node caps this at 30 items to control processing costs and API usage. Records then pass through the "Only Websites & Emails" filter, which uses strict validation to ensure both organization_website_url and email fields exist and contain data — eliminating invalid records before expensive AI processing occurs.

### Stage 2: Intelligent Web Scraping
Valid leads enter the Loop Over Items batch processor, which handles them sequentially to manage API rate limits.
For each lead, the workflow fetches their website homepage using the HTTP Request node with retry logic (5 attempts, 5-second waits) and "always output data" enabled to capture even failed requests. The If node checks the response for error indicators; if errors are detected, the problematic record is immediately deleted from the database via Delete row(s) to prevent future processing waste. Successfully scraped HTML content passes through the Markdown converter, which transforms it into clean markdown that AI models can analyze more effectively.

### Stage 3: AI Content Analysis
The markdown content flows into the first AI node, "Summarize Website Page," which uses GPT-4.1-mini (cost-efficient for summarization tasks) with a specialized system prompt. The AI reads the scraped content and generates a comprehensive two-paragraph abstract, similar in detail to an academic paper abstract, focusing on what the business does, their projects, services, and unique differentiators. The output is structured JSON with an "abstract" field. Multiple page summaries (if the workflow is extended to scrape additional pages) are collected by the Aggregate node, which combines all abstracts into a single array for comprehensive analysis.

### Stage 4: Personalized Icebreaker Generation
The aggregated summaries, along with prospect profile data (name, headline, company), flow into the "Generate Multiline Icebreaker" node powered by GPT-4.1 (higher intelligence for creative writing). This node uses an advanced system prompt with specific rules: write in a spartan/laconic tone, avoid special characters and hyphens, use the format "Really Loved {thing}, especially how you're {doing/managing/handling} {otherThing}," reference small non-obvious details (never generic compliments like "Love your website!"), and shorten company names and locations naturally. The prompt includes a few-shot example teaching the AI the exact style and depth expected. Temperature is set to 0.5 for creative but consistent output.

### Stage 5: Campaign Deployment & Cleanup
The generated icebreaker is formatted into Instantly.ai's API structure and sent via HTTP POST by the "Sending ice breaker to instantly" node. The payload includes the lead's email, first name, last name, company name, the personalized icebreaker as the "personalization" field, and the website URL, and it supports custom_variables for additional personalization fields. The API call uses skip_if_in_campaign: true to prevent duplicate additions. After successful campaign addition, the Delete row(s)1 node removes the processed record from your data table, maintaining a clean queue. The Loop Over Items node then processes the next lead until all 30 are complete. A sketch of this payload appears below.
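As an illustration of Stage 5, the lead-add payload might be assembled like this in a Code node before the HTTP Request. The field names (`email`, `first_name`, `personalization`, `custom_variables`, `skip_if_in_campaign`) follow the mapping given in Step 6 below; treat the overall body shape as a sketch and verify it against Instantly's current API docs.

```javascript
// n8n Code node - a minimal sketch of the Stage 5 payload. Field names come
// from the workflow's Step 6 mapping; verify the body shape against
// Instantly's current API documentation.
const lead = $('Loop Over Items').item.json;
const icebreaker = $json.message.content.icebreaker;

const payload = {
  api_key: 'YOUR_INSTANTLY_API_KEY',            // better: use n8n credentials
  campaign_id: '00000000-0000-0000-0000-000000000000',
  skip_if_in_workspace: false,
  skip_if_in_campaign: true,                     // prevents duplicate additions
  leads: [
    {
      email: lead.email,
      first_name: lead.first_name,
      last_name: lead.last_name,
      company_name: lead.Headline,               // or a dedicated column
      personalization: icebreaker,               // the generated icebreaker
      website: lead.organization_website_url,
      custom_variables: {},                      // extra personalization fields
    },
  ],
};

return [{ json: payload }];
```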
## Required Setup & Database Structure

**n8n Data Table Requirements:**
- Table Name: Configurable (default "Real estate")
- Required Columns:
  - id (unique identifier for each record)
  - first_name (prospect's first name)
  - last_name (prospect's last name)
  - email (valid email address)
  - organization_website_url (full URL with https://)
  - Headline (job title/company descriptor)
  - Email_Status (filter field for processing control)

**API Credentials:**
- OpenAI API Key (connected as "Sycorda" credential)
  - Access to the GPT-4.1-mini model
  - Access to the GPT-4.1 model
  - Sufficient credits for batch processing (approximately $0.01-0.03 per lead)
- Instantly.ai API Key
  - Campaign ID (replace the placeholder "00000000-0000-0000-0000-000000000000")
  - Active campaign with proper email accounts configured

**Environment Setup:**
- n8n instance with the @n8n/n8n-nodes-langchain package installed
- Stable internet connection for web scraping
- Adequate execution timeout limits (recommended 5+ minutes for 30 leads)

## Business Use Cases
- **B2B Service Providers**: Agencies, consultancies, and professional services firms can personalize outreach by referencing a prospect's specific service offerings, client types, or operational approach to book discovery calls and consultations.
- **SaaS Companies**: Software vendors across any vertical can demonstrate product value through highly relevant cold outreach that references prospect pain points, tech stack, or business model visible on their websites.
- **Marketing & Creative Agencies**: Agencies offering design, content creation, SEO, or digital marketing services can personalize outreach by referencing prospects' current marketing approach, website quality, or brand positioning.
- **E-commerce & Retail**: Online retailers and D2C brands can reach potential wholesale partners, distributors, or B2B clients by mentioning their product lines, target markets, or unique value propositions.
- **Financial Services**: Fintech companies, accounting firms, and financial advisors can personalize cold outreach by referencing a prospect's business size, industry focus, or financial complexity to offer relevant solutions.
- **Recruitment & Staffing**: Agencies can reach potential clients by mentioning their hiring needs, company growth, team structure, or industry specialization visible on career pages and about sections.
- **Technology & Development**: Software development agencies, IT consultancies, and tech vendors can reference a prospect's current technology stack, digital transformation initiatives, or technical challenges to position relevant solutions.
- **Education & Training**: Corporate training providers, coaching services, and educational platforms can personalize outreach by mentioning company culture, team development focus, or learning initiatives referenced on websites.

## Revenue Potential
The same icebreaker approach used by leading cold email experts delivers 4-5% higher reply rates compared to generic outreach templates. By investing approximately $0.11-0.18 per personalized lead (AI processing + email sending costs), businesses achieve response rates of 4-5% versus industry-standard non-personalized campaigns.

Scalability: Process 30 leads (or however many you want; just replace the number 30 with your own) in minutes with minimal manual oversight, allowing sales teams to maintain high personalization quality while reaching hundreds of prospects weekly. The automation handles the research-intensive work, letting your team focus on high-value conversations with engaged prospects.
## Difficulty Level & Build Time
**Difficulty:** Intermediate
**Estimated Build Time:** 2-3 hours for complete setup

**Technical Requirements:**
- Familiarity with n8n node configuration
- Basic understanding of API integrations
- JSON data structure knowledge
- OpenAI prompt engineering basics

**Setup Complexity Breakdown:**
- Data table creation and population: 30 minutes
- Workflow node configuration: 45 minutes
- OpenAI credential setup and testing: 20 minutes
- Instantly.ai API integration: 25 minutes
- Prompt optimization and testing: 45 minutes
- Error handling verification: 15 minutes

**Maintenance Requirements:** Minimal once configured. Monthly tasks include monitoring OpenAI costs, updating prompts based on performance data, and refilling the data table with new leads.

## Detailed Setup Steps

### Step 1: Create Your Data Table
1. In n8n, navigate to your project
2. Create a new data table with a name relevant to your industry
3. Add columns: id (auto), first_name (text), last_name (text), email (text), organization_website_url (text), Headline (text), Email_Status (text)
4. Import your lead list via CSV or manual entry
5. Set Email_Status to blank or a specific value you'll filter by

### Step 2: Configure OpenAI Credentials
1. Obtain an OpenAI API key from platform.openai.com
2. In n8n, go to Credentials → Add Credential → OpenAI
3. Name it "Sycorda" (or update all OpenAI nodes with your credential name)
4. Paste your API key and test the connection
5. Ensure your OpenAI account has access to GPT-4.1 models

### Step 3: Import and Configure the Workflow
1. Copy the provided workflow JSON
2. In n8n, create a new workflow and paste the JSON
3. Update the "Get row(s)" node:
   - Select your data table
   - Configure the Email_Status filter condition
   - Adjust the limit if needed (default 30)
4. Verify the "Loop Over Items" node has reset: false

### Step 4: Configure Website Scraping
1. In the "Request web page for URL1" node, verify:
   - The URL expression references the correct field: {{ $('Get row(s)').item.json.organization_website_url }}
   - Retry settings: 5 attempts, 5000ms wait
   - "Always Output Data" is enabled
2. Test with a single lead to verify HTML retrieval

### Step 5: Customize AI Prompts for Your Industry
1. In the "Summarize Website Page" node:
   - Review the system prompt
   - Adjust the abstract detail level if needed
   - Keep JSON output enabled
2. In the "Generate Multiline Icebreaker" node:
   - CRITICAL: Update the few-shot example with your target industry specifics
   - Customize the tone guidance to match your brand voice
   - Modify the icebreaker format template if desired
   - Adjust the temperature (0.5 default; lower for consistency, higher for variety)
   - Update the profile format to match your industry (change the "Property Manager or Real estate" references)

### Step 6: Set Up Instantly.ai Integration
1. Log into your Instantly.ai account
2. Navigate to Settings → API Key and copy your key
3. Create or select the campaign where leads will be added
4. Copy the Campaign ID from the URL (format: 00000000-0000-0000-0000-000000000000)
5. In the "Sending ice breaker to instantly" node:
   - Update the JSON body with your api_key
   - Replace the campaign_id placeholder
   - Adjust the skip_if_in_workspace and skip_if_in_campaign flags
   - Map the lead fields correctly:
     - email: {{ $('Loop Over Items').item.json.email }}
     - first_name: {{ $('Loop Over Items').item.json.first_name }}
     - last_name: {{ $('Loop Over Items').item.json.last_name }}
     - personalization: {{ $json.message.content.icebreaker }}
     - company_name: Extract from Headline or add to the data table
     - website: {{ $('Loop Over Items').item.json.organization_website_url }}

### Step 7: Test and Validate
1. Start with 3-5 test leads in your data table
2. Execute the workflow manually
3. Verify each stage:
   - Data retrieval from the table
   - Website scraping success
   - AI summary generation
   - Icebreaker quality and format
   - Instantly.ai lead addition
   - Database cleanup
4. Check your Instantly.ai campaign to confirm leads appear with custom variables
5. Review error handling by including one lead with an invalid website

### Step 8: Scale and Monitor
1. Increase the batch size in the Limit node (30 → 50+ if needed)
2. Add more leads to your data table
3. Set up execution logs to monitor costs
4. Track response rates in Instantly.ai
5. A/B test prompt variations to optimize icebreaker performance
6. Consider scheduling automatic execution with n8n's Schedule Trigger

## Advanced Customization Options
- **Multi-Page Scraping**: Extend the workflow to scrape additional pages (about, services, portfolio) by adding multiple HTTP Request nodes after the first scrape, then modify the Aggregate node to combine all page summaries before icebreaker generation.
- **Industry-Specific Prompts**: Create separate workflow versions with customized prompts for different verticals or buyer personas to maximize relevance and response rates for each segment.
- **Dynamic Campaign Routing**: Add Switch or If nodes after icebreaker generation to route leads to different Instantly.ai campaigns based on company size, location, or detected business focus from the AI analysis.
- **Sentiment Analysis**: Insert an additional OpenAI node after summarization to analyze the prospect's website tone and adjust your icebreaker style accordingly (formal vs. casual, technical vs. conversational).
- **CRM Integration**: Replace or supplement the data table with direct CRM integration (HubSpot, Salesforce, Pipedrive) to pull leads and push results back, creating a fully automated lead enrichment pipeline.
- **Competitor Mention Detection**: Add a specialized prompt to the summarization phase that identifies whether prospects mention competitors or specific pain points, then use this intelligence in the icebreaker for even higher relevance.
- **LinkedIn Profile Enrichment**: Add a Clay or Clearbit integration before the workflow to enrich email lists with LinkedIn profile data, then reference recent posts or career changes in the icebreaker alongside website insights.
- **A/B Testing Framework**: Duplicate the "Generate Multiline Icebreaker" node with different prompt variations and use a randomizer to split leads between versions (see the sketch below), then track performance in Instantly.ai to identify the highest-converting approach.
- **Webhook Trigger**: Replace the manual trigger with a webhook that fires when new leads are added to your data table or CRM, creating a fully automated lead-to-campaign pipeline that requires zero manual intervention.
- **Cost Optimization**: Replace the GPT-4.1 models with GPT-4o-mini or Claude models for cost savings if response quality remains acceptable, or implement a tiered approach where only high-value leads get premium model processing.
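For the A/B Testing Framework idea above, the randomizer can be as small as this Code-node sketch, which assigns each lead a variant for a downstream Switch node to route on. The `variant` field name is illustrative.

```javascript
// n8n Code node - a minimal randomizer sketch for A/B prompt testing.
// Assigns each incoming lead a variant; route on json.variant with a Switch
// node to one of two "Generate Multiline Icebreaker" copies.
return $input.all().map((item) => ({
  json: {
    ...item.json,
    variant: Math.random() < 0.5 ? 'A' : 'B', // 50/50 split
  },
}));
```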
by Dart
Automatically generate a meeting summary from your meetings through Fireflies.ai, save it to a Dart document, and create a review task with the meeting link attached.

## What it does
This workflow activates when a Fireflies.ai meeting is processed (via a webhook). It retrieves the meeting transcript via the FirefliesAI transcript node and uses an AI model to generate a structured summary.

## Who's it for
- Teams or individuals needing automatic meeting notes.
- Project managers tracking reviews and actions from meetings.
- Users of Fireflies.ai and Dart who want to streamline their documentation and follow-up process.

## How to set up
1. Import the workflow into n8n.
2. Connect your Dart account (it will need workspace and folder access).
3. Add your PROD webhook link from the webhook node to your Fireflies.ai API settings.
4. Add your Fireflies.ai API key on the Fireflies Transcript node.
5. Replace the dummy Folder ID and Dartboard ID with your actual target IDs.
6. Choose your preferred AI model for generating the summaries.

## Requirements
- n8n account
- Connected Dart account
- Connected Fireflies.ai account (with access to API key and webhooks)

## How to customize the workflow
- Edit the AI prompt to adjust the tone, style, or format of the meeting summaries.
- Add, remove, or change the summary sections to match your needs (e.g., Key Takeaways, Action Items, Summary).
by James Li
## Summary
Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template listens to an Onfleet event and sends a WhatsApp message. You can easily streamline this with the recipient of the delivery or your customer support numbers.

## Configurations
- Update the Onfleet trigger node with your own Onfleet credentials; to register for an Onfleet API key, please visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks with Onfleet Support.
- Update the Twilio node with your own Twilio credentials, then add your own expression for the To number or simply source the recipient's phone number from the Onfleet event (see the sketch below).
- Toggle To Whatsapp to OFF if you want to simply use Twilio's SMS API.
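Twilio addresses WhatsApp recipients by prefixing the number with `whatsapp:`. A small Code-node sketch of deriving the To number from the trigger payload follows; the exact path to the phone field in Onfleet's webhook body is an assumption, so inspect your trigger's output to confirm it.

```javascript
// n8n Code node - a minimal sketch, assuming the Onfleet trigger exposes the
// task's first recipient at data.task.recipients[0].phone (verify the exact
// path by inspecting your trigger's output).
const phone = $json.data?.task?.recipients?.[0]?.phone;
if (!phone) {
  throw new Error('No recipient phone number found on the Onfleet event');
}

// Twilio's WhatsApp API expects numbers prefixed with "whatsapp:";
// plain SMS uses the bare E.164 number instead.
return [{ json: { to: `whatsapp:${phone}` } }];
```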
by Vigh Sandor
## Overview
This n8n workflow provides automated CI/CD testing for Kubernetes applications using KinD (Kubernetes in Docker). It creates temporary infrastructure, runs tests, and cleans up everything automatically.

### Three-Phase Lifecycle

**INIT Phase - Infrastructure Setup**
- Installs dependencies (sshpass, Docker, KinD)
- Creates the KinD cluster
- Installs Helm and Nginx Ingress
- Installs HAProxy for port forwarding
- Deploys ArgoCD
- Applies the ApplicationSet

**TEST Phase - Automated Testing**
- Downloads the Robot Framework test script from GitLab
- Installs Robot Framework and the Browser library
- Executes automated browser tests
- Packages test results
- Sends results via Telegram

**DESTROY Phase - Complete Cleanup**
- Removes HAProxy
- Deletes the KinD cluster
- Uninstalls KinD
- Uninstalls Docker
- Sends a completion notification

### Execution Modes
- **Full Pipeline Mode** (progress_only = false): Automatically progresses through all phases: INIT → TEST → DESTROY
- **Single Phase Mode** (progress_only = true): Executes only the specified phase and stops

## Prerequisites

### Local Environment (n8n Host)
- n8n instance version 1.0 or higher
- Community node n8n-nodes-robotframework installed
- Network access to the target host and GitLab
- Minimum 4 GB RAM, 20 GB disk space

### Remote Target Host
- Linux server (Ubuntu, Debian, CentOS, Fedora, or Alpine)
- SSH access with sudo privileges
- Minimum 8 GB RAM (16 GB recommended)
- **20 GB** free disk space
- Open ports: 22, 80, 60080, 60443, 56443

### External Services
- **GitLab** account with an OAuth2 application
- Repository with test files (test.robot, config.yaml, demo-applicationSet.yaml)
- **Telegram Bot** for notifications
- Telegram Chat ID

## Setup Instructions

### Step 1: Install Community Node
1. In the n8n web interface, navigate to Settings → Community Nodes
2. Install n8n-nodes-robotframework
3. Restart n8n if prompted

### Step 2: Configure GitLab OAuth2
**Create GitLab OAuth2 Application**
1. Log in to GitLab
2. Navigate to User Settings → Applications
3. Create a new application with redirect URI: https://your-n8n-instance.com/rest/oauth2-credential/callback
4. Grant scopes: read_api, read_repository, read_user
5. Copy the Application ID and Secret

**Configure in n8n**
1. Create a new GitLab OAuth2 API credential
2. Enter the GitLab server URL, Client ID, and Secret
3. Connect and authorize

### Step 3: Prepare GitLab Repository
Create the repository structure:
your-repo/
├── test.robot
├── config.yaml
├── demo-applicationSet.yaml
└── .gitlab-ci.yml

Upload your:
- Robot Framework test script
- KinD cluster configuration
- ArgoCD ApplicationSet manifest

### Step 4: Configure Telegram Bot
**Create Bot**
1. Open Telegram and search for @BotFather
2. Send the /newbot command
3. Save the API token

**Get Chat ID**
- For a personal chat:
  1. Send a message to your bot
  2. Visit: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates
  3. Copy the chat ID (a positive number)
- For a group chat:
  1. Add the bot to the group
  2. Send a message mentioning the bot
  3. Visit the getUpdates endpoint
  4. Copy the group chat ID (a negative number)

**Configure in n8n**
1. Create a Telegram API credential
2. Enter the bot token
3. Save the credential

### Step 5: Prepare Target Host
Verify SSH access:
- Test the connection: `ssh -p <port> <username>@<host_ip>`
- Verify sudo: `sudo -v`

The workflow will automatically install dependencies.
### Step 6: Import and Configure Workflow
**Import Workflow**
1. Copy the workflow JSON
2. In n8n, click Workflows → Import from File/URL
3. Import the JSON

**Configure Parameters**
Open the Set Parameters node and update:

| Parameter | Description | Example |
|-----------|-------------|---------|
| target_host | IP address of remote host | 192.168.1.100 |
| target_port | SSH port | 22 |
| target_user | SSH username | ubuntu |
| target_password | SSH password | your_password |
| progress | Starting phase | INIT, TEST, or DESTROY |
| progress_only | Execution mode | true or false |
| KIND_CONFIG | Path to config.yaml | config.yaml |
| ROBOT_SCRIPT | Path to test.robot | test.robot |
| ARGOCD_APPSET | Path to ApplicationSet | demo-applicationSet.yaml |

> Security: Use n8n credentials or environment variables instead of storing passwords in the workflow.

**Configure GitLab Nodes**
For each of the three GitLab nodes:
1. Set Owner (username or organization)
2. Set Repository name
3. Set File Path (uses the parameter from Set Parameters)
4. Set Reference (branch: main or master)
5. Select Credentials (GitLab OAuth2)

**Configure Telegram Nodes**
- Send ROBOT Script Export Pack node: set the Chat ID and select Credentials
- Process Finish Report node: update the chat ID in the command

### Step 7: Test and Execute
1. Test individual components first
2. Run the full workflow
3. Monitor execution (30-60 minutes total)

## How to Use

### Execution Examples
**Complete Testing Pipeline**
- `progress = "INIT"`, `progress_only = "false"`
- Flow: INIT → TEST → DESTROY

**Setup Infrastructure Only**
- `progress = "INIT"`, `progress_only = "true"`
- Flow: INIT → Stop

**Test Existing Infrastructure**
- `progress = "TEST"`, `progress_only = "false"`
- Flow: TEST → DESTROY

**Cleanup Only**
- `progress = "DESTROY"`
- Flow: DESTROY → Complete

### Trigger Methods
**1. Manual Execution**
- Open the workflow in n8n
- Set parameters
- Click Execute Workflow

**2. Scheduled Execution**
- Open the Schedule Trigger node
- Configure the time (default: 1 AM daily)
- Ensure the workflow is Active

**3. Webhook Trigger**
- Configure the webhook in the GitLab repository
- Add the webhook URL to GitLab CI
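For the webhook trigger, the GitLab CI job ultimately just POSTs to the workflow's webhook URL with the run parameters. A minimal sketch of that call, written as Node.js (18+) for consistency with the other examples here; the webhook path and payload fields are placeholders for your workflow's actual production URL and parameters.

```javascript
// A minimal sketch of what the GitLab CI job's curl call does, written as a
// Node.js (18+) fetch. The webhook path and payload fields are placeholders.
const N8N_WEBHOOK = 'https://your-n8n-instance.com/webhook/kind-pipeline';

async function triggerPipeline() {
  const res = await fetch(N8N_WEBHOOK, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ progress: 'INIT', progress_only: 'false' }),
  });
  if (!res.ok) throw new Error(`Webhook call failed: ${res.status}`);
  console.log('Pipeline triggered');
}

triggerPipeline();
```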
### Monitoring Execution
**In the n8n Interface:**
- View progress in the Executions tab
- Watch node-by-node execution
- Check output details

**Via Telegram:**
- Receive test results after the TEST phase
- Receive a completion notification after the DESTROY phase

**Execution Timeline:**

| Phase | Duration |
|-------|----------|
| INIT | 15-25 minutes |
| TEST | 5-10 minutes |
| DESTROY | 5-10 minutes |

### Understanding Test Results
After the TEST phase, you receive testing-export-pack.tar.gz via Telegram containing:
- log.html - Detailed test execution log
- report.html - Test summary report
- output.xml - Machine-readable results
- screenshots/ - Browser screenshots

To view:
1. Download the .tar.gz from Telegram
2. Extract: `tar -xzf testing-export-pack.tar.gz`
3. Open report.html for the summary
4. Open log.html for detailed steps

Success indicators:
- All tests marked PASS
- Screenshots show expected UI states
- No error messages in logs

Failure indicators:
- Tests marked FAIL
- Error messages in logs
- Unexpected UI states in screenshots

## Configuration Files

**test.robot**
Robot Framework test script structure:
- Uses the Browser library
- Connects to http://autotest.innersite
- Logs in with autotest/autotest
- Takes screenshots
- Runs in headless Chromium

**config.yaml**
KinD cluster configuration:
- **1 control-plane node**
- **1 worker node**
- Port mappings: 60080 (HTTP), 60443 (HTTPS), 56443 (API)
- Kubernetes version: v1.30.2

**demo-applicationSet.yaml**
ArgoCD Application manifest:
- Points to the Git repository
- Automatic sync enabled
- Deploys to the default namespace

**.gitlab-ci.yml**
Triggers the n8n workflow on commits:
- Installs curl
- Sends a POST request to the webhook

## Troubleshooting

**SSH Permission Denied**
Symptoms: `Error: Permission denied (publickey,password)`
Solutions:
- Verify the password is correct
- Check the SSH authentication method
- Ensure the user has sudo privileges
- Use SSH keys instead of passwords

**Docker Installation Fails**
Symptoms: `Error: Package docker-ce is not available`
Solutions:
- Check OS version compatibility
- Verify network connectivity
- Manually add the Docker repository

**KinD Cluster Creation Timeout**
Symptoms: `Error: Failed to create cluster: timed out`
Solutions:
- Check available resources (RAM/CPU/disk)
- Verify Docker daemon status
- Pre-pull images
- Increase the timeout

**ArgoCD Not Accessible**
Symptoms: `Error: Failed to connect to autotest.innersite`
Solutions:
- Check HAProxy status: `systemctl status haproxy`
- Verify the /etc/hosts entry
- Check the Ingress: `kubectl get ingress -n argocd`
- Test port forwarding: `curl http://127.0.0.1:60080`

**Robot Framework Tests Fail**
Symptoms: `Error: Chrome failed to start`
Solutions:
- Verify the Chromium installation
- Check the Browser library: `rfbrowser show-trace`
- Ensure the correct executablePath in test.robot
- Install missing dependencies

**Telegram Notification Not Received**
Symptoms: Workflow completes but no message
Solutions:
- Verify the Chat ID
- Test the Telegram API manually
- Check the bot status
- Re-add the bot to the group
**Workflow Hangs**
Symptoms: Node shows "Executing..." indefinitely
Solutions:
- Check the n8n logs
- Test the SSH connection manually
- Verify the target host status
- Add timeouts to commands

## Best Practices

### Development Workflow
1. **Test locally first**: Run Robot Framework tests on a local machine and verify the test script syntax
2. **Version control**: Keep all files in Git, use branches for experiments, and tag stable versions
3. **Incremental changes**: Make small testable changes and test each change separately
4. **Backup data**: Export the workflow regularly, save test results, and store credentials securely

### Production Deployment
1. **Separate environments**: Dev for frequent testing, Staging for pre-production validation, Production for stable scheduled runs
2. **Monitoring**: Set up execution alerts, monitor host resources, and track success/failure rates
3. **Disaster recovery**: Document cleanup procedures, keep a backup host ready, and test the restoration process
4. **Security**: Use SSH keys, rotate credentials quarterly, and implement network segmentation

### Maintenance Schedule

| Frequency | Tasks |
|-----------|-------|
| Daily | Review logs, check notifications |
| Weekly | Review failures, check disk space |
| Monthly | Update dependencies, test recovery |
| Quarterly | Rotate credentials, security audit |

## Advanced Topics

### Custom Configurations
**Multi-node clusters:**
- Add more worker nodes for production-like environments
- Configure resource limits
- Add custom port mappings

**Advanced testing:**
- Load testing with multiple iterations
- Integration testing for the full deployment pipeline
- Chaos engineering with failure injection

### Integration with Other Tools
**Monitoring:**
- Prometheus for metrics collection
- Grafana for visualization

**Logging:**
- ELK stack for log aggregation
- Custom dashboards

**CI/CD Integration:**
- Jenkins pipelines
- GitHub Actions
- Custom webhooks

## Resource Requirements

### Minimum

| Component | CPU | RAM | Disk |
|-----------|-----|-----|------|
| n8n Host | 2 | 4 GB | 20 GB |
| Target Host | 4 | 8 GB | 20 GB |

### Recommended

| Component | CPU | RAM | Disk |
|-----------|-----|-----|------|
| n8n Host | 4 | 8 GB | 50 GB |
| Target Host | 8 | 16 GB | 50 GB |

## Useful Commands

**KinD**
- List clusters: `kind get clusters`
- Get kubeconfig: `kind get kubeconfig --name automate-tst`
- Export logs: `kind export logs --name automate-tst`

**Docker**
- List containers: `docker ps -a --filter "name=automate-tst"`
- Enter control plane: `docker exec -it automate-tst-control-plane bash`
- View logs: `docker logs automate-tst-control-plane`

**Kubernetes**
- Get all resources: `kubectl get all -A`
- Describe pod: `kubectl describe pod -n argocd <pod-name>`
- View logs: `kubectl logs -n argocd <pod-name> --follow`
- Port forward: `kubectl port-forward -n argocd svc/argocd-server 8080:80`

**Robot Framework**
- Run tests: `robot test.robot`
- Run a specific test: `robot -t "Test Name" test.robot`
- Generate a report: `robot --outputdir results test.robot`

## Additional Resources

### Official Documentation
- **n8n**: https://docs.n8n.io
- **KinD**: https://kind.sigs.k8s.io
- **ArgoCD**: https://argo-cd.readthedocs.io
- **Robot Framework**: https://robotframework.org
- **Browser Library**: https://marketsquare.github.io/robotframework-browser

### Community
- **n8n Community**: https://community.n8n.io
- **Kubernetes Slack**: https://kubernetes.slack.com
- **ArgoCD Slack**: https://argoproj.github.io/community/join-slack
- **Robot Framework Forum**: https://forum.robotframework.org

### Related Projects
- **k3s**: Lightweight Kubernetes distribution
- **minikube**: Local Kubernetes alternative
- **Flux CD**: Alternative GitOps tool
- **Playwright**: Alternative browser automation
by James Li
## Summary
Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template listens to an Onfleet event and interacts with the QuickBooks API. You can easily streamline this with your QuickBooks invoices or other entities. Typically, you would create an invoice when an Onfleet task is created, allowing your customers to pay ahead of an upcoming delivery.

## Configurations
- Update the Onfleet trigger node with your own Onfleet credentials; to register for an Onfleet API key, please visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks with Onfleet Support.
- Update the QuickBooks Online node with your QuickBooks credentials.
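As a sketch of the invoice step: QuickBooks Online accepts a minimal invoice with a customer reference and a single sales line. A Code-node example mapping the Onfleet event into that shape follows; the Onfleet payload path and the CustomerRef/ItemRef IDs are placeholders you would look up in your own accounts.

```javascript
// n8n Code node - a minimal sketch mapping an Onfleet taskCreated event to a
// QuickBooks Online invoice body. The Onfleet payload path and the
// CustomerRef/ItemRef IDs are placeholders; substitute your real IDs.
const task = $json.data?.task ?? {};

const invoice = {
  CustomerRef: { value: '1' }, // QuickBooks customer ID (placeholder)
  Line: [
    {
      Amount: 25.0, // delivery fee (placeholder)
      DetailType: 'SalesItemLineDetail',
      SalesItemLineDetail: { ItemRef: { value: '1', name: 'Delivery' } },
      Description: `Delivery for Onfleet task ${task.shortId ?? ''}`,
    },
  ],
};

return [{ json: invoice }];
```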
by James Li
## Summary
Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template automatically updates the tags on a Shopify Order when an Onfleet event occurs.

## Configurations
- Update the Onfleet trigger node with your own Onfleet credentials; to register for an Onfleet API key, please visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks with Onfleet Support.
- Update the Shopify node with your Shopify credentials and add your own tags to the Shopify Order.
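Shopify stores order tags as a single comma-separated string, so updates should merge with the existing tags rather than overwrite them. A Code-node sketch of appending a status tag derived from the Onfleet event; the event-to-tag mapping and the field paths are illustrative.

```javascript
// n8n Code node - a minimal sketch of merging a new tag into a Shopify
// order's existing tags. Shopify keeps tags as one comma-separated string,
// so overwriting it would drop tags you want to keep.
const existing = ($json.tags ?? '')
  .split(',')
  .map((t) => t.trim())
  .filter(Boolean);

// Illustrative mapping from Onfleet trigger events to order tags.
const tagByEvent = { taskStarted: 'out-for-delivery', taskCompleted: 'delivered' };
const newTag = tagByEvent[$json.triggerName] ?? 'onfleet-updated';

// De-duplicate and write back in Shopify's comma-separated format.
const tags = [...new Set([...existing, newTag])].join(', ');
return [{ json: { tags } }];
```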