by Nalin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Complete account-based outreach automation with Octave context engine

## Who is this for?
Revenue teams, account-based marketing professionals, and growth operators who want a complete, automated pipeline from account identification to contextualized outreach. Built for teams ready to move beyond fragmented point solutions to an integrated, context-aware GTM engine.

## What problem does this solve?
Most GTM teams are flying blind with disconnected tools that can't talk to each other. You qualify accounts in one system, find contacts in another, research context manually, then hope your email sequences land. Each step loses context, and by the time you're writing outreach, you've forgotten why the account was qualified in the first place. Octave centralizes all this typically fragmented context - your ICP definitions, personas, value propositions, and business logic - so every agent operation can act on the same unified understanding of your market. This workflow demonstrates how Octave's agents work together seamlessly because they all share the same context foundation.

## What this workflow does
**Complete Account-to-Outreach Pipeline:** This workflow demonstrates the full power of Octave's context engine by connecting multiple agent operations in a seamless flow. Unlike traditional tools that lose context at each handoff, Octave centralizes your business context - ICP definitions, personas, value propositions, competitive positioning - so every agent operates from the same unified understanding of your market.

**External Context Research:**
- Gathers real-time external data about target accounts (job postings, news, funding, etc.)
- Processes this information to create runtime context for later use in outreach
- Establishes the "why reach out now" foundation for the entire workflow

**Company-Level Qualification:**
- Uses Octave's company qualification to assess account fit against your specific offering
- Leverages Product- and Segment-level fit criteria defined in your Library
- Filters out accounts that don't meet your qualification thresholds
- Ensures only high-potential accounts proceed through the workflow

**Intelligent Contact Discovery:**
- Runs Octave's prospector agent on qualified accounts
- Finds relevant stakeholders based on responsibilities and business context, not just job titles
- Discovers multiple contacts per account for comprehensive coverage
- Maintains qualification context when identifying the right people

**Runtime Context Integration:**
- Takes the external context gathered at the beginning and injects it into sequence generation
- Creates truly dynamic, timely outreach that references current company events
- Generates sequences that feel impossibly relevant and well-researched

**Multi-Contact Sequence Generation:**
- Splits discovered contacts into individual records for processing
- Generates contextualized email sequences (1-7 emails) for each contact
- Maintains account-level context while creating contact-specific messaging

**Automated Campaign Deployment:**
- Automatically adds all qualified contacts with their contextualized sequences to email campaigns
- Maps dynamic content to campaign variables for seamless execution
- Maintains the context chain from qualification through delivery

## Setup
**Required Credentials:**
- Octave API key and workspace access
- External data source API (job boards, news APIs, enrichment services, etc.)
- Email platform API key (Instantly.ai configured, easily adaptable)
- Optional: LLM credentials if using the example external research agent

**Step-by-Step Configuration:**
1. **Set up Account Input Source:** Replace `your-webhook-path-here` with a unique, secure path. Configure your account source (CRM, website visitors, target lists) to send company data, and ensure the data includes company name and domain.
2. **Configure External Context Research:** Replace the example AI agent with your preferred external data source, set up connections to job boards, news APIs, or enrichment services, and configure context gathering to find timely, relevant information about target accounts.
3. **Set up Company Qualification Agent:** Add your Octave API credentials, replace `your-octave-company-qualification-agent-id` with your actual agent ID, and configure qualification criteria at Product and Segment levels in your Octave Library.
4. **Configure Prospector Agent:** Replace `your-octave-prospector-agent-id` with your actual prospector agent ID, define target personas and stakeholder types in your Octave Library, and set contact discovery parameters for optimal coverage.
5. **Set up Sequence Agent:** Replace `your-octave-sequence-agent-id` with your actual sequence agent ID, configure runtime context integration for dynamic content, and test sequence quality with the external context integration.
6. **Configure Email Campaign Platform:** Add your email platform API credentials, replace `your-campaign-id-here` with your actual campaign ID, and ensure the campaign supports custom variables for dynamic content.

**Required Webhook Payload Format:**
{ "body": { "companyName": "InnovateTech Solutions", "companyDomain": "innovatetech.com" } }

## How to customize
**External Context Sources** - replace the example research with your data sources:
- **Job Board APIs:** Reference current hiring and team expansion
- **News APIs:** Mention funding, product launches, or market expansion
- **Enrichment Services:** Pull technology adoption, market changes, or competitive moves
- **Social Monitoring:** Reference recent company posts or industry discussions

**Company Qualification** - configure qualification in your Octave company qualification agent:
- **Product Level:** Define "good fit" and "bad fit" questions for your core offering
- **Segment Level:** Set criteria for different market segments or use cases
- **Qualification Thresholds:** Adjust the filter score based on your standards

**Contact Discovery** - customize prospecting in your Octave prospector agent:
- **Target Personas:** Define which Library personas to prioritize
- **Organizational Levels:** Focus on specific seniority levels or decision-making authority
- **Contact Volume:** Adjust how many contacts to discover per qualified account

**Runtime Context Integration** - configure dynamic content injection:
- **Context Definition:** Specify what external data represents in your sequences
- **Usage Instructions:** Define how to incorporate context into messaging
- **Email-Level Control:** Apply different context to different emails in sequences

**Sequence Generation** - customize email creation:
- **Core Context (Library):** Define personas, use cases, and offering definitions
- **Strategy (Playbooks):** Configure messaging frameworks and value propositions
- **Writing Style (Agent):** Adjust tone, voice, and communication approach

**Campaign Integration** - adapt for different email platforms:
- Update API endpoints and authentication for your preferred platform
- Modify variable mapping for platform-specific requirements
- Adjust sequence formatting and length based on platform capabilities

## Use Cases
- Complete inbound lead processing from website visitor to qualified outreach
- Event-triggered account processing for funding announcements or hiring spikes
- Competitive displacement campaigns with account qualification and contact discovery
- Market expansion automation for entering new territories or segments
- Product launch outreach with context-aware targeting and messaging
- Customer expansion workflows for upselling within existing account bases
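The company-level qualification step described above boils down to a simple filter over scored accounts. A minimal sketch of that filter as an n8n Code node follows; the response fields (`fitScore` in particular) are an assumed shape for illustration, not Octave's documented API, so map them to your agent's actual output.

```javascript
// Hypothetical qualification results from the company qualification agent.
// Field names here are assumptions, not Octave's actual response schema.
const accounts = [
  { companyName: "InnovateTech Solutions", companyDomain: "innovatetech.com", fitScore: 0.82 },
  { companyName: "Acme Corp", companyDomain: "acme.com", fitScore: 0.35 },
];

// Keep only accounts at or above the qualification threshold, mirroring the
// workflow's "filter out accounts that don't meet your thresholds" step.
const THRESHOLD = 0.6;
const qualified = accounts.filter((a) => a.fitScore >= THRESHOLD);

console.log(qualified.map((a) => a.companyDomain)); // accounts that proceed
```

Adjust `THRESHOLD` to match the "Qualification Thresholds" setting in your Library; everything below it is dropped before contact discovery runs.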
by Nalin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Generate dynamic email sequences with runtime context and external data

## Who is this for?
Growth teams, sales development reps, and outbound marketers who want to reference specific, real-time information about prospects in their email sequences. Built for teams that have access to external data sources and want to create truly contextualized outreach that feels impossibly relevant.

## What problem does this solve?
Most outbound sequences are static - they use the same messaging for everyone regardless of what's actually happening at the prospect's company right now. You might know they're hiring, launched a product, got funding, or expanded to new markets, but your email sequences can't dynamically reference these timely events. This workflow shows how to inject real-time external context into Octave's sequence generation, creating outreach that feels like you're personally monitoring each prospect's company.

## What this workflow does
**Lead Data & Context Collection:**
- Receives lead information via webhook (firstName, companyName, companyDomain, profileURL, jobTitle)
- Uses external data sources to gather timely context about the prospect's company (the included example is an AI agent that researches current job postings to find roles they're actively hiring for)
- Processes this context into structured data for sequence generation

**Runtime Context Integration:**
- Feeds external context into Octave's sequence generation as "runtime context"
- Defines both WHAT the context is ("they are hiring a software engineer") and HOW to use it ("mention the role in the opening")
- Allows Octave to weave timely, relevant details into each email naturally, so sequences feel like personal research rather than mass outreach

**Dynamic Sequence Generation:**
- Leverages Octave's context engine plus runtime data to create hyper-relevant sequences (1-7 emails)
- Generates subject lines and email content that reference specific, current company context
- Maintains your positioning and value prop while incorporating timely relevance

**Campaign Integration:**
- Automatically adds leads with contextualized sequences to your email platform
- Maps generated content to campaign variables for automated sending
- Supports multiple email platforms with easy customization

## Setup
**Required Credentials:**
- Octave API key and workspace access
- External data source API (job boards, news APIs, enrichment services, etc.)
- Email platform API key (Instantly.ai configured, easily adaptable)
- Optional: LLM credentials if using the example AI agent for testing

**Step-by-Step Configuration:**
1. **Set up External Data Source:** Replace the AI Agent with your preferred data source (job board APIs, news APIs, company databases), configure data collection to find relevant, timely information about prospects, and structure the output to provide clean context for sequence generation.
2. **Set up Octave Sequence Agent:** Add your Octave API credentials in n8n, replace `your-octave-sequence-agent-id` with your actual sequence agent ID, and configure the runtime context parameters: Runtime Context defines WHAT the external data represents; Runtime Instructions specify HOW to use it in the messaging.
3. **Configure Email Platform:** Add your email platform API credentials, replace `your-campaign-id-here` with your actual campaign ID, and ensure the campaign supports custom variables for dynamic content.
4. **Set up Lead Source:** Replace `your-webhook-path-here` with a unique, secure path, configure your lead source to send prospect data to the webhook, and test the end-to-end flow with sample leads.

**Required Webhook Payload Format:**
{ "body": { "firstName": "Alex", "lastName": "Chen", "companyName": "InnovateTech", "companyDomain": "innovatetech.com", "profileURL": "https://linkedin.com/in/alexchen", "email": "alex@innovatetech.com", "jobTitle": "VP of Engineering" } }

## How to customize
**External Data Sources** - replace the AI agent with your preferred context collection method:
- **Job Board APIs:** Reference current hiring needs and team expansion
- **News APIs:** Mention recent company announcements, funding, or product launches
- **Social Media Monitoring:** Reference recent LinkedIn posts, company updates, or industry discussions
- **Enrichment Services:** Pull real-time company data, technology stack changes, or market expansion

**Runtime Context Configuration** - customize how external data integrates with sequences:
- **Context Definition:** Specify what the external data represents ("they just raised Series B funding")
- **Usage Instructions:** Define how to incorporate it ("mention the funding in the opening and tie it to growth challenges")
- **Email-Level Control:** Configure different context usage for different emails in the sequence
- **Global vs. Specific:** Apply context to all emails or target specific messages

**Data Processing** - replace the example AI agent with your external data processing:
- Modify data source connections to pull relevant context
- Ensure consistent output formatting for runtime context integration
- Add error handling for cases where external data isn't available
- Implement fallback context for prospects without relevant external data

**Sequence Customization** - configure Octave sequence generation:
- **Core Context (Library):** Define your personas, use cases, and offering definitions
- **Strategy (Playbooks):** Configure messaging frameworks and value proposition delivery
- **Writing Style (Agent):** Adjust tone, voice, and communication style

**Email Platform Integration** - adapt for different email sequencing platforms:
- Update API endpoints and authentication for your preferred platform
- Modify variable mapping for platform-specific custom fields
- Adjust sequence length and formatting requirements

## Use Cases
- Job posting-triggered outreach for hiring managers and HR teams
- Funding announcement follow-ups for growth-stage companies
- Product launch congratulations with relevant use case discussions
- Market expansion outreach when companies enter new territories
- Technology adoption sequences based on recent stack additions
- Event attendance follow-ups with session-specific references
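The WHAT/HOW split described above can be illustrated as a small n8n Code node that turns external research output into the two runtime fields handed to the sequence agent. The field names (`runtimeContext`, `runtimeInstructions`) and the research payload shape are illustrative assumptions, not Octave's exact parameter names.

```javascript
// Hypothetical output of the external research step (e.g. a job-board lookup).
const research = { role: "Senior Software Engineer", postedDaysAgo: 4 };

// Build the two runtime inputs: WHAT the context is, and HOW to use it.
// Field names are assumptions for illustration; match them to your agent's
// actual runtime context parameters.
const item = {
  runtimeContext: `They are hiring a ${research.role} (posted ${research.postedDaysAgo} days ago)`,
  runtimeInstructions:
    "Mention the open role in the opening line and tie it to scaling challenges",
};

console.log(item.runtimeContext);
```

Keeping the two fields separate is what lets you reuse one instruction ("mention it in the opening") across many different pieces of context without rewriting prompts per prospect.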
by SpaGreen Creative
# Send Automatic WhatsApp Order Confirmations from Shopify with Rapiwa API

## Who's it for
This n8n workflow helps Shopify store owners and teams automatically confirm orders via WhatsApp. It checks whether the customer's number is valid using the Rapiwa API, sends a personalized message, and logs every attempt in Google Sheets - saving time and reducing manual work. Whether you're a solo entrepreneur or managing a small team, this solution gives you a low-cost alternative to the official WhatsApp Business API, without losing control or personalization.

## Features
- Receives new order details via webhook upon order creation or update
- Iterates over incoming data in manageable batches for smoother processing
- Extracts and formats customer and order details from the Shopify webhook payload
- Strips non-numeric characters from WhatsApp numbers for consistent formatting
- Uses the Rapiwa API to check if the WhatsApp number is valid and active
- Branches the workflow based on number validity - separates verified from unverified
- Sends a custom WhatsApp confirmation message to verified customers using Rapiwa
- Updates Google Sheet rows with validity and status

## How it Works
1. Triggered by a Shopify webhook or by reading rows from a Google Sheet.
2. **Normalizes and cleans** the order payload: extracts details like customer name, phone, items, shipping, and payment info, and cleans phone numbers (removes special characters).
3. Verifies whether the number is registered on WhatsApp via the Rapiwa API.
4. If valid: sends a templated WhatsApp message and updates the Google Sheet with validity = verified and status = sent.
5. If invalid: skips sending and updates the sheet with validity = unverified and status = not sent.
6. Adds a wait/delay between sends to prevent rate limits and keeps an audit trail in the connected Google Sheet.

## How to Set Up
1. Set up a Shopify webhook for new orders (or connect a Google Sheet).
2. Create a Google Sheet with columns: name, number, order id, item name, total price, validity, status.
3. Create and configure a Rapiwa Bearer token in n8n.
4. Add a Google Sheets OAuth2 credential in n8n.
5. Import the workflow in n8n and configure these nodes:
   - Webhook or Sheet Trigger
   - Loop Over Items (SplitInBatches)
   - Normalize Payload (Code)
   - Clean WhatsApp Number (Code)
   - Rapiwa WhatsApp Check (HTTP Request)
   - Conditional Branch (If)
   - Send WhatsApp Message (HTTP Request)
   - Update Google Sheet (Google Sheets)
   - Wait Node (delay per send)

## Requirements
- Shopify store with order webhook enabled (or order list in a Google Sheet)
- A verified Rapiwa API token
- A working n8n instance with HTTP and Google Sheets nodes enabled
- A Google Sheet with the required structure and valid OAuth credentials in n8n

## How to Customize the Workflow
- Modify the message template with your own brand tone or emojis.
- Add country-code logic in the Clean Number node if needed.
- Use a unique order id in your Google Sheet to prevent mismatches.
- Increase or decrease the delay in the Wait node (e.g., 5-10 seconds).
- Use additional logic in Code nodes to handle discounts, promotions, or more line items.

## Workflow Highlights
- Triggered by a Shopify webhook, receiving new order data from Shopify.
- Cleans and extracts order data from the raw payload.
- Normalizes and validates the customer's WhatsApp number using Rapiwa's verify-whatsapp endpoint.
- Sends the order confirmation via Rapiwa's send-message endpoint.
- Logs every result into Google Sheets (verified/unverified + sent/not sent).

## Setup in n8n
1. **Check WhatsApp Registration** - use an HTTP Request node:
   - URL: https://app.rapiwa.com/api/verify-whatsapp
   - Method: POST
   - Auth: httpBearerAuth using your Rapiwa token
   - Body: { "number": "cleaned_number" }
2. **Branch Based on Validity** - use an If node:
   - Condition: {{ $json.data.exists }} == true (or "true" if it arrives as a string)
3. **Send Message via Rapiwa**
   - Endpoint: https://app.rapiwa.com/api/send-message
   - Method: POST
   - Body:

Hi {{ $json.customer_full_name }},
Thank you for shopping with SpaGreen Creative! We're happy to confirm that your order has been successfully placed.
🧾 Order Details
• Product: {{ $json.line_item.title }}
• SKU: {{ $json.line_item.sku }}
• Quantity: {{ $json.line_item.quantity }}
• Vendor: {{ $json.line_item.vendor }}
• Order ID: {{ $json.name }}
• Product ID: {{ $json.line_item.product_id }}
📦 Shipping Information
{{ $json.shipping_address.address1 }} {{ $json.shipping_address.address2 }}
{{ $json.shipping_address.city }}, {{ $json.shipping_address.country }} - {{ $json.shipping_address.zip }}
💳 Payment Summary
• Subtotal: {{ $json.subtotal_price }} BDT
• Tax (VAT): {{ $json.total_tax_amount }} BDT
• Shipping: {{ $json.total_shipping_amount }} BDT
• Discount: {{ $json.total_discount_amount }} BDT
• Total Paid: {{ $json.total_price }} BDT
Order Date: {{ $json.created_date }}
Warm wishes,
Team SpaGreen Creative

## Sample Google Sheet Structure
| name | number | order id | item name | total price | validity | status |
| ------------ | ------------- | ------------- | ------------------------------ | ----------- | -------- | ------ |
| Abdul Mannan | 8801322827799 | 8986469695806 | Iphone 10 | 1150 | verified | sent |
| Abdul Mannan | 8801322827799 | 8986469695806 | S25 UltraXXXXeen Android Phone | 23000 | verified | sent |

## Tips
- Always ensure phone numbers have a country code (e.g., 880 for BD).
- Clean numbers with regex: replace(/\D/g, '')
- Adjust Rapiwa API response parsing depending on the actual structure (true vs "true").
- Use row_number for sheet updates, or a unique order id for better targeting.
- Use the Wait node to add 3-10 seconds between sends.

## Important Notes
- Avoid reordering sheet rows - updates rely on a consistent row_number.
- shopify-app-auth is the credential name used in the export - make sure it points to your Rapiwa token.
- Use a test sheet before going live.
- Rapiwa has request limits - avoid rapid sending.
- Add media/image logic later using message_type: media.

## Future Enhancements (Ideas)
- Add a Telegram/Slack alert once the batch finishes.
- Include media (e.g., product image, invoice) in the message.
- Detect and resend failed messages.
- Integrate with Shopify's GraphQL API for additional data.
- Auto-mark fulfillment status based on WhatsApp confirmation.

## Support & Community
- WhatsApp: 8801322827799
- Discord: discord
- Facebook Group: facebook group
- Website: https://spagreen.net
- Envato/Codecanyon: codecanyon portfolio
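The "Clean WhatsApp Number" step and the country-code tip above can be sketched as one small Code-node function. This is a minimal illustration, not the template's exact node code; the leading-zero handling is an assumption about typical Bangladeshi local formats (e.g. 01322827799), so adapt it to your market.

```javascript
// Strip all non-digits and prepend the country code (880 for BD) when missing.
function cleanNumber(raw, countryCode = "880") {
  const digits = String(raw).replace(/\D/g, ""); // remove +, spaces, dashes, etc.
  if (digits.startsWith(countryCode)) return digits;
  // Local numbers often start with 0 (e.g. 01322827799); drop it before prefixing.
  return countryCode + digits.replace(/^0/, "");
}

console.log(cleanNumber("+880 1322-827799")); // "8801322827799"
console.log(cleanNumber("01322827799"));      // "8801322827799"
```

Both inputs normalize to the same international form, which is what the verify-whatsapp endpoint and the Google Sheet log expect.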
by Budi SJ
# Automated Local Business Lead Generator with AI, Social Media & WhatsApp Links

This workflow automates the process of generating and managing local business leads by scraping Google Maps data, analyzing business information with AI, and creating personalized outreach messages. The system searches for local businesses based on keywords and locations, extracts their contact information and reviews, then generates professional outreach messages tailored to each business. Results are stored in Google Sheets and notifications are sent via Telegram.

## 📊 Google Sheets Template
Use this template: Local Business Lead Generator

## 🔑 Key Features
- 🔍 Searches Google Maps for local businesses using SerpAPI based on keyword and location parameters
- 📋 Collects comprehensive business information including name, address, rating, reviews, phone numbers, and websites
- 🤖 Uses OpenRouter's LLM to analyze business data and generate personalized outreach messages
- 🌐 Analyzes business websites to provide targeted improvement suggestions
- 📱 Automatically detects Instagram and TikTok profiles from business websites
- 🗣️ Generates messages in the local language based on country code settings
- 📊 Organizes and stores all collected data in Google Sheets with proper categorization
- 🚀 Sends real-time updates and lead summaries via Telegram bot
- 💬 Automatically creates WhatsApp links for easy business communication

## 🔧 Requirements
- **SerpAPI Account + API Key** - for Google Maps business data extraction
- **OpenRouter Account + API Key** - for AI-powered message generation and analysis
- **Telegram Bot + API Token** - for notifications and bot interactions
- **Google Sheets** - connected to n8n for data storage
- **Google Sheets Template** - pre-configured with the proper column structure

## 🎁 Benefits
- Fully automated lead generation and outreach system
- Saves time and increases efficiency for local business marketing
- High personalization improves engagement and response rates
- Scalable to any niche or location
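The "automatically creates WhatsApp links" feature maps onto WhatsApp's public click-to-chat URL format (`https://wa.me/<number>?text=...`). A minimal sketch, assuming that format is what the workflow builds (the template's actual node code may differ); the message here is a placeholder for the AI-generated outreach text:

```javascript
// Build a click-to-chat link: international number with no "+", plus an
// optional URL-encoded text parameter that pre-fills the message.
function makeWhatsAppLink(phone, message) {
  const digits = String(phone).replace(/\D/g, ""); // wa.me rejects +, spaces, dashes
  return `https://wa.me/${digits}?text=${encodeURIComponent(message)}`;
}

console.log(makeWhatsAppLink("+62 812-3456-7890", "Hello from our team!"));
// → https://wa.me/6281234567890?text=Hello%20from%20our%20team!
```

Storing this link in the Google Sheet next to each lead lets anyone on the team open a pre-filled chat with one click.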
by Evoort Solutions
# 📺 Automated YouTube Video Metadata Extraction

## Workflow Description
This workflow leverages the YouTube Metadata API to automatically extract detailed video information from any YouTube URL. It uses n8n to automate the entire process and stores the metadata in a neatly formatted Google Docs document. Perfect for content creators, marketers, and researchers who need quick, organized YouTube video insights at scale.

## ⚙️ Node-by-Node Explanation

### 1. ✅ On Form Submission
This node acts as the trigger. When a user submits a form containing a YouTube video URL, the workflow is activated.
- Input: YouTube Video URL
- Platform: Webhook or n8n Form Trigger

### 2. 🌐 YouTube Metadata API (HTTP Request)
This node sends the video URL to the YouTube Metadata API via an HTTP request.
- Action: GET request
- Headers: -H "X-RapidAPI-Key: YOUR_API_KEY" -H "X-RapidAPI-Host: youtube-metadata1.p.rapidapi.com"
- Endpoint example: https://youtube-metadata1.p.rapidapi.com/video?url=YOUTUBE_VIDEO_URL
- Output: JSON with metadata such as title, description, views, likes, comments, duration, upload date, channel info, and thumbnails

### 3. 🧠 Reformat Metadata (Code Node)
This node reformats the raw metadata into a clean, human-readable text block. Example output format:

🎬 Title: How to Build Workflows with n8n
🧾 Description: This tutorial explains how to build...
👤 Channel: n8n Tutorials
📅 Published On: 2023-05-10
⏱️ Duration: 10 minutes, 30 seconds
👁️ Views: 45,678
👍 Likes: 1,234
💬 Comments: 210
🔗 URL: https://youtube.com/watch?v=abc123

### 4. 📝 Append to Google Docs
This node connects to your Google Docs account and appends the formatted metadata to a selected document. Document format example:

📌 Video Entry - [Date]
🎬 Title:
🧾 Description:
👤 Channel:
📅 Published On:
⏱️ Duration:
👁️ Views:
👍 Likes:
💬 Comments:
🔗 URL:

## 📄 Use Cases
- **Content Creators:** Quickly analyze competitor content or inspirations
- **Marketers:** Collect campaign video performance data
- **Researchers:** Compile structured metadata across videos
- **Social Media Managers:** Create content briefs effortlessly

## ✅ Benefits
- 🚀 Time-saving: automates manual video data extraction
- 📊 Accurate: uses a reliable, updated YouTube metadata API
- 📁 Organized: formats and stores data in Google Docs
- 🔁 Scalable: handles unlimited YouTube URLs
- 🎯 User-friendly: simple setup and clean output

## 🔑 How to Get Your API Key for the YouTube Metadata API
1. Go to the YouTube Metadata API on RapidAPI.
2. Sign up or log in to your RapidAPI account.
3. Click Subscribe to Test and choose a pricing plan (free or paid).
4. Copy your API key shown in the "X-RapidAPI-Key" section.
5. Use it in your HTTP request headers.

## 🧰 Google Docs Integration - Full Setup Instructions

### 🔐 Step 1: Enable Google Docs API
1. Go to the Google Cloud Console.
2. Create a new project or select an existing one.
3. Navigate to APIs & Services > Library.
4. Search for Google Docs API and click Enable.
5. Also enable the Google Drive API (for document access).

### 🛠 Step 2: Create OAuth Credentials
1. Go to APIs & Services > Credentials.
2. Click Create Credentials > OAuth Client ID.
3. Select Web Application or Desktop App.
4. Add authorized redirect URIs if needed (e.g., for n8n OAuth).
5. Save your Client ID and Client Secret.

### 🔗 Step 3: Connect n8n to Google Docs
1. In n8n, go to Credentials > Google Docs API.
2. Add new credentials using the Client ID and Secret from above.
3. Authenticate with your Google account and allow access.

### 📘 Step 4: Create and Format Your Google Document
1. Go to Google Docs and create a new document.
2. Name it (e.g., YouTube Metadata Report).
3. Optionally, add a title or table of contents.
4. Copy the Document ID from the URL: https://docs.google.com/document/d/DOCUMENT_ID/edit

### 🔄 Step 5: Use the Append Content to Document Node in n8n
Use the Google Docs node in n8n with:
- Operation: Append Content
- Document ID: your copied Google Doc ID
- Content: the formatted video summary string

## 🎨 Customization Options
- 💡 Add tags: insert hashtags or categories based on video topics.
- 📆 Organize by date: create headers for each day's or week's entries.
- 📸 Embed thumbnails: use thumbnail_url to embed preview images.
- 📊 Spreadsheet export: use Google Sheets instead of Docs if preferred.

## 🛠 Troubleshooting Tips
| Issue | Solution |
| --- | --- |
| ❌ Auth error (Google Docs) | Ensure the correct OAuth redirect URI and permissions. |
| ❌ API request fails | Check the API key and request structure; test on RapidAPI's playground. |
| 📄 Doc not updating | Verify the Document ID and sharing permissions. |
| 🧾 Bad formatting | Debug the Code node output using logging or the console in n8n. |
| 🌐 n8n timeout | Consider using Wait or Split In Batches for large submissions. |

## 🚀 Ready to Launch?
You can deploy this workflow in just minutes using n8n.
👉 Start Automating with n8n
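The "Reformat Metadata" Code node described above can be sketched as follows. The input field names (`title`, `viewCount`, `durationSeconds`, etc.) are assumptions about the API response shape, not the real payload; map them to the fields your HTTP Request node actually returns.

```javascript
// Turn raw seconds into the "10 minutes, 30 seconds" style shown above.
function formatDuration(totalSeconds) {
  const m = Math.floor(totalSeconds / 60);
  const s = totalSeconds % 60;
  return `${m} minutes, ${s} seconds`;
}

// Build the human-readable block appended to the Google Doc.
function formatMetadata(v) {
  return [
    `🎬 Title: ${v.title}`,
    `👤 Channel: ${v.channel}`,
    `📅 Published On: ${v.publishedAt}`,
    `⏱️ Duration: ${formatDuration(v.durationSeconds)}`,
    `👁️ Views: ${v.viewCount.toLocaleString("en-US")}`, // adds thousands separators
    `🔗 URL: ${v.url}`,
  ].join("\n");
}

const text = formatMetadata({
  title: "How to Build Workflows with n8n",
  channel: "n8n Tutorials",
  publishedAt: "2023-05-10",
  durationSeconds: 630,
  viewCount: 45678,
  url: "https://youtube.com/watch?v=abc123",
});
console.log(text);
```

The resulting string is exactly what the Append Content operation writes into the document, so any formatting fix belongs here rather than in the Google Docs node.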
by Rakin Jakaria
## Use Cases
Analyze e-commerce product pages for conversion optimization, audit SaaS landing pages for signup improvements, or evaluate marketing campaign pages for better lead generation.

## Good to know
- At the time of writing, Google Gemini API calls have usage costs. See Google AI Pricing for current rates.
- The workflow analyzes publicly accessible pages only - pages behind login walls or with restricted access won't work.
- Analysis quality depends on page content structure - heavily image-based pages may receive limited text-based recommendations.

## How it works
1. The user submits a landing page URL through the form trigger interface.
2. The HTTP Request node fetches the complete HTML content from the target landing page.
3. The content is converted from HTML to markdown for cleaner AI processing and better text extraction.
4. Google Gemini 2.5 Flash analyzes the page using expert CRO knowledge and 2024 conversion best practices.
5. The AI generates specific, actionable recommendations based on the actual page content rather than generic advice.
6. The Information Extractor processes the analysis into 5 prioritized improvement tips with relevant visual indicators.
7. Results are delivered through a completion form showing concrete steps to improve conversion rates.

## How to use
- The form trigger is configured for direct URL submission but can be replaced with webhook triggers for integration into existing websites or apps.
- Multiple pages can be analyzed sequentially, though each requires a separate workflow execution.
- Recommendations focus on high-impact changes that don't require heavy development work.

## Requirements
- Google Gemini (PaLM) API account for AI-powered analysis
- Publicly accessible landing pages for analysis
- n8n instance with proper webhook configuration

## Customizing this workflow
The CRO analysis can be tailored to specific industries by modifying the AI system prompt - try focusing on e-commerce checkout flows, SaaS trial conversions, or local business lead capture forms. Add competitive analysis by incorporating multiple URL inputs and comparative recommendations.
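The HTML-to-markdown step exists so the model sees page copy rather than markup. In n8n this is typically done with the built-in Markdown node; as a rough stand-in illustrating the idea (not the template's actual implementation), stripping scripts, styles, and tags looks like this:

```javascript
// Crude HTML-to-text reduction: drop non-content blocks, remove tags,
// collapse whitespace. A real conversion (n8n's Markdown node, or a library
// like turndown) preserves headings and links, which matter for CRO analysis.
function htmlToText(html) {
  return html
    .replace(/<(script|style)[\s\S]*?<\/\1>/gi, "") // drop scripts and styles
    .replace(/<[^>]+>/g, " ")                        // remove remaining tags
    .replace(/\s+/g, " ")                            // collapse whitespace
    .trim();
}

console.log(htmlToText("<h1>Free Trial</h1><script>track()</script><p>Sign up today</p>"));
// → Free Trial Sign up today
```

Cleaner input also reduces token usage per Gemini call, which matters given the usage costs noted above.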
by Growth AI
French Public Procurement Tender Monitoring Workflow Overview This n8n workflow automates the monitoring and filtering of French public procurement tenders (BOAMP - Bulletin Officiel des Annonces des Marchés Publics). It retrieves tenders based on your preferences, filters them by market type, and identifies relevant opportunities using keyword matching. Who is this for? Companies seeking French public procurement opportunities Consultants monitoring specific market sectors Organizations tracking government contracts in France What it does The workflow operates in two main phases: Phase 1: Automated Tender Collection Retrieves all tenders from the BOAMP API based on your configuration Filters by market type (Works, Services, Supplies) Stores complete tender data in Google Sheets Handles pagination automatically for large datasets Phase 2: Intelligent Keyword Filtering Downloads and extracts text from tender PDF documents Searches for your specified keywords within tender content Saves matching tenders to a separate "Target" sheet for easy review Tracks processing status to avoid duplicates Requirements n8n instance (self-hosted or cloud) Google account with Google Sheets access Google Sheets API credentials configured in n8n Setup Instructions Step 1: Duplicate the Configuration Spreadsheet Access the template spreadsheet: Configuration Template Click File → Make a copy Save to your Google Drive Note the URL of your new spreadsheet Step 2: Configure Your Preferences Open your copied spreadsheet and configure the Config tab: Market Types - Check the categories you want to monitor: Travaux (Works/Construction) Services Fournitures (Supplies) Search Period - Enter the number of days to look back (e.g., "30" for the last 30 days) Keywords - Enter your search terms as a comma-separated list (e.g., "informatique, cloud, cybersécurité") Step 3: Import the Workflow Copy the workflow JSON from this template In n8n, click Workflows → Import from File/URL Paste the JSON and 
import Step 4: Update Google Sheets Connections Replace all Google Sheets node URLs with your spreadsheet URL: Nodes to update: Get config (2 instances) Get keyword Get Offset Get All Append row in sheet Update offset Reset Offset Ok Target offre For each node: Open the node settings Update the Document ID field with your spreadsheet URL Verify the Sheet Name matches your spreadsheet tabs Step 5: Configure Schedule Triggers The workflow has two schedule triggers: Schedule Trigger1 (Phase 1 - Tender Collection) Default: 0 8 1 * * (1st day of month at 8:00 AM) Adjust based on how frequently you want to collect tenders Schedule Trigger (Phase 2 - Keyword Filtering) Default: 0 10 1 * * (1st day of month at 10:00 AM) Should run after Phase 1 completes To modify: Open the Schedule Trigger node Click Cron Expression Adjust timing as needed Step 6: Test the Workflow Manually execute Phase 1 by clicking the Schedule Trigger1 node and selecting Execute Node Verify tenders appear in your "All" sheet Execute Phase 2 by triggering the Schedule Trigger node Check the "Target" sheet for matching tenders How the Workflow Works Phase 1: Tender Collection Process Configuration Loading - Reads your preferences from Google Sheets Offset Management - Tracks pagination position for API calls API Request - Fetches up to 100 tenders per batch from BOAMP Market Type Filtering - Keeps only selected market categories Data Storage - Formats and saves tenders to the "All" sheet Pagination Loop - Continues until all tenders are retrieved Offset Reset - Prepares for next execution Phase 2: Keyword Matching Process Keyword Loading - Retrieves search terms from configuration Tender Retrieval - Gets unprocessed tenders from "All" sheet Sequential Processing - Loops through each tender individually PDF Extraction - Downloads and extracts text from tender documents Keyword Analysis - Searches for matches with accent/case normalization Status Update - Marks tender as processed Match Evaluation - 
Determines if keywords were found Target Storage - Saves relevant tenders with match details Customization Options Adjust API Parameters In the HTTP Request node, you can modify: limit: Number of records per batch (default: 100) Additional filters in the where parameter Modify Keyword Matching Logic Edit the Get query node to adjust: Text normalization (accent removal, case sensitivity) Match proximity requirements Context length around matches Change Data Format Update the Format Results node to modify: Date formatting PDF URL generation Field mappings Spreadsheet Structure Your Google Sheets should contain these tabs: **Config** - Your configuration settings **Offset** - Pagination tracking (managed automatically) **All** - Complete tender database **Target** - Filtered tenders matching your keywords Troubleshooting No tenders appearing in "All" sheet: Verify your configuration period isn't too restrictive Check that at least one market type is selected Ensure API is accessible (test the HTTP Request node) PDF extraction errors: Some PDFs may be malformed or protected Check the URL generation in Format Results node Verify PDF URLs are accessible in a browser Duplicate tenders in Target sheet: Ensure the "Ok" status is being written correctly Check the Filter node is excluding processed tenders Verify row_number matching in update operations Keywords not matching: Keywords are case-insensitive and accent-insensitive Verify your keywords are spelled correctly Check the extracted text contains your terms Performance Considerations Phase 1 processes 100 tenders per iteration with a 10-second wait between batches Phase 2 processes tenders sequentially to avoid overloading PDF extraction Large datasets (1000+ tenders) may take significant time to process Consider running Phase 1 less frequently if tender volume is manageable Data Privacy All data is stored in your Google Sheets No external databases or third-party storage BOAMP API is publicly accessible (no authentication
required) Ensure your Google Sheets permissions are properly configured Support and Updates This workflow retrieves data from the BOAMP public API. If API structure changes, nodes may require updates. Monitor the workflow execution logs for errors and adjust accordingly.
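The accent- and case-insensitive keyword matching described in Phase 2 can be sketched as an n8n Code-node snippet. This is a minimal illustration of the normalization approach, not the template's actual node code:

```javascript
// Normalize text: lowercase and strip diacritics so "Cybersécurité"
// matches the keyword "cybersecurite" (and vice versa).
function normalize(text) {
  return text
    .toLowerCase()
    .normalize("NFD")                  // split letters from combining accents
    .replace(/[\u0300-\u036f]/g, "");  // drop the combining accent marks
}

// Return the configured keywords found in the extracted PDF text.
function matchKeywords(pdfText, keywordCsv) {
  const haystack = normalize(pdfText);
  return keywordCsv
    .split(",")
    .map((k) => k.trim())
    .filter((k) => k.length > 0 && haystack.includes(normalize(k)));
}

// Example:
// matchKeywords("Marché de cybersécurité et cloud privé", "informatique, cloud, cybersécurité")
// → ["cloud", "cybersécurité"]
```

Unicode NFD decomposition followed by stripping combining marks is what makes "cybersécurité" and "cybersecurite" compare equal without maintaining an accent lookup table.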
by Atta
This workflow automatically turns any YouTube video into a structured blog post with Gemini AI. By sending a simple POST request with a YouTube URL to a webhook, it downloads the video’s audio, transcribes the content, and generates a blog-ready article with a title, description, tags, and category. The final result, along with the full transcript and original video URL, is delivered to your chosen webhook or CMS. How it works: The workflow handles the entire process of transforming YouTube videos into complete blog posts using Gemini AI transcription and structured text generation. Once triggered, it: Downloads the video’s audio Transcribes the spoken content into text Generates a blog post in the video’s original language Creates: A clear and engaging title A short description Suggested category and tags The full transcript of the video The original YouTube video URL This makes it easy to repurpose video content into publish-ready articles in minutes. This template is ideal for content creators, marketers, educators, and bloggers who want to quickly turn video content into written posts without manual transcription or editing. Setup Instructions Install yt-dlp on your local machine or server where n8n runs. This is required to download YouTube audio. Get a Google Gemini API key and configure it in your AI nodes. Webhook Input Configuration: Endpoint: The workflow starts with a Webhook Trigger. Method: POST Example Request Body: { "videoUrl": "https://www.youtube.com/watch?v=lW5xEm7iSXk" } Configure Output Webhook: Add your target endpoint in the last node where the blog post JSON is sent. This could be your CMS, a Notion database, or another integration. Customization Guidance **Writing Style:** Update the AI Agent’s prompt to adjust tone (e.g., casual, professional, SEO-optimized). **Metadata:** Modify how categories and tags are generated to fit your website’s taxonomy. 
**Integration:** Swap the final webhook with WordPress, Ghost, Notion, or Slack to fit your publishing workflow. **Transcript Handling:** Save the full transcript separately if you also want searchable video archives.
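An Execute Command (or Code) node could assemble the yt-dlp invocation for the audio-only download step roughly as follows. The flags shown are standard yt-dlp options, but the template's exact command, output path, and audio format are assumptions here:

```javascript
// Build the yt-dlp argument list for an audio-only download.
// -x extracts audio; -o controls the output filename template.
function buildYtDlpArgs(videoUrl, outputDir) {
  return [
    "-x",                       // extract audio only, no video stream
    "--audio-format", "mp3",    // re-encode to mp3 for the transcription step
    "-o", `${outputDir}/%(id)s.%(ext)s`,
    videoUrl,
  ];
}

// An Execute Command node would then run: yt-dlp <args...>
const args = buildYtDlpArgs("https://www.youtube.com/watch?v=lW5xEm7iSXk", "/tmp/audio");
```

Keeping the argument list in one function makes it easy to swap the output directory or audio format without touching the rest of the workflow.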
by Oneclick AI Squad
This workflow builds a fully private, self-hosted AI chatbot using Meta Llama models. Unlike cloud-based AI APIs, every conversation stays on your infrastructure — no data leaves your environment. The chatbot remembers conversation history per session, routes different query types to specialized Llama prompts, logs all interactions, and can escalate unresolved queries to a human agent via Slack. Powered by Ollama (local) or Groq/Together AI (cloud Llama endpoints) — configurable in one node. What's the Goal? To give businesses a production-grade private AI chatbot that: Runs on their own servers with zero data exposure Handles customer support, internal helpdesk, sales FAQs, and onboarding Remembers context across a full conversation session Routes intelligently: support vs sales vs general vs escalation Logs every turn for quality review, training, and compliance Why Does It Matter? Most businesses cannot send sensitive conversations to OpenAI or Anthropic due to: GDPR, HIPAA, SOC2, or internal data governance policies Confidential customer data in support queries Proprietary internal knowledge that must stay private Llama models run fully on-premise. This workflow gives those businesses the same quality AI chatbot experience with complete data sovereignty. Monetization: sell this as a private AI chatbot deployment package to enterprises. Setup fee plus monthly hosting — recurring revenue. How It Works Stage A — Message Intake Webhook receives incoming chat message with session ID and user message text. Set node stores Llama endpoint config and normalizes the payload. Stage B — Session Memory Code node loads conversation history for the session from an in-memory store. Appends the new user message to build the full context window for Llama. Stage C — Intent Router IF node checks the message for keywords to classify intent: support issue, sales inquiry, general question, or escalation request. Routes to the matching Llama system prompt branch. 
Stage D — Llama Inference HTTP Request calls the Llama API (Ollama local, Groq, or Together AI). Sends full conversation history plus the matched system prompt. Returns the assistant reply. Stage E — Response Handling Code node parses the Llama output, updates the session memory, checks if escalation is needed, and formats the final response. Stage F — Logging and Delivery Google Sheets logs every turn. Slack fires only when escalation is flagged. Webhook responds with the chatbot reply and session metadata. Configuration Requirements LLAMA_ENDPOINT — Your Ollama URL (http://localhost:11434) or Groq/Together AI base URL LLAMA_API_KEY — API key if using Groq or Together AI (leave blank for local Ollama) LLAMA_MODEL — Model name, e.g. llama3, llama3.1:8b, llama3.1:70b, mixtral SLACK_WEBHOOK_URL — For human escalation alerts GOOGLE_SHEET_ID — Conversation audit log Setup Guide Option A (Local / Private): Install Ollama: curl -fsSL https://ollama.ai/install.sh | sh Pull model: ollama pull llama3.1 Set LLAMA_ENDPOINT to http://localhost:11434 Leave LLAMA_API_KEY blank Option B (Cloud Llama via Groq — fastest): Sign up at groq.com and copy your API key Set LLAMA_ENDPOINT to https://api.groq.com/openai/v1 Set LLAMA_MODEL to llama-3.1-8b-instant or llama-3.1-70b-versatile Paste your Groq API key in LLAMA_API_KEY Option C (Together AI): Sign up at together.ai Set endpoint to https://api.together.xyz/v1 Set model to meta-llama/Llama-3.1-8B-Instruct-Turbo Steps for all options: Open Set Llama Config node — fill in all values Set SLACK_WEBHOOK_URL and GOOGLE_SHEET_ID Activate and POST to /webhook/llama-chat Sample Payload { "sessionId": "user-abc-123", "message": "My order arrived damaged and I need a refund", "userId": "user_123", "botPersona": "support", "userName": "Sarah" } Explore More Automation: Contact us to design AI-powered lead nurturing, content engagement, and multi-platform reply workflows tailored to your growth strategy.
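The keyword-based intent router (Stage C) and the chat payload sent to the Llama endpoint (Stage D) might look like this inside a Code node. The keyword lists are illustrative placeholders, not the template's actual values; the payload shape follows the OpenAI-compatible chat format that Ollama, Groq, and Together AI all accept:

```javascript
// Stage C: crude keyword classifier (illustrative keyword lists, adjust to your traffic).
function classifyIntent(message) {
  const text = message.toLowerCase();
  if (/human|agent|escalate/.test(text)) return "escalation";
  if (/refund|broken|damaged|error|issue/.test(text)) return "support";
  if (/price|pricing|plan|demo|buy/.test(text)) return "sales";
  return "general";
}

// Stage D: assemble the OpenAI-compatible chat payload. The full session
// history is spliced between the system prompt and the new user turn.
function buildChatPayload(model, systemPrompt, history, userMessage) {
  return {
    model,
    messages: [
      { role: "system", content: systemPrompt },
      ...history,                               // prior turns for this session
      { role: "user", content: userMessage },
    ],
  };
}

const intent = classifyIntent("My order arrived damaged and I need a refund");
// intent === "support"
```

Checking the escalation pattern first matters: a message like "this error needs a human" should reach a person, not the support prompt branch.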
by Roger
Who it is for This workflow is designed for developers, content managers, and website administrators managing multilingual WordPress sites. It is highly beneficial for websites built with complex Advanced Custom Fields (ACF) or custom Gutenberg layouts that standard translation plugins often struggle to process efficiently. How it works When a post is created or updated, WordPress triggers a webhook to start the workflow. The workflow fetches the raw post JSON directly from the WordPress REST API. An OpenAI node analyzes a text snippet to detect the exact source language, ensuring it only routes and translates into missing target languages to prevent duplication. A code node then recursively extracts text from both standard fields and specific ACF fields into a single array. These strings are translated in bulk via DeepL, maintaining HTML formatting. Finally, the workflow rebuilds the original JSON structure with the translated text and pushes it back to WordPress as a newly linked translation draft. Requirements WordPress Application Password (for HTTP Basic Auth) OpenAI API Key DeepL API Key A WordPress plugin capable of firing webhooks on post updates (e.g., WP Webhooks) How to set up Configure the Webhook node and point your WordPress webhook plugin to the provided test/production URL. Add your HTTP Basic Auth credentials (WP Admin Username and App Password) to the WordPress request nodes. Add your OpenAI API key and DeepL API key to their respective nodes. Update the base URL in the HTTP nodes to point to your actual WordPress domain. How to customize Open the "Smart Router & Targets" node and update the target languages array to match your website's supported languages. Most importantly, open the "Extract Content" code node and modify the text keys array to perfectly match the field names used in your site's unique ACF configuration.
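The recursive extraction step can be sketched as below. The field names in textKeys are placeholders for your site's ACF configuration, and the exact shape of the "Extract Content" node in the template may differ:

```javascript
// Recursively walk the WordPress post JSON and collect translatable strings
// from a whitelist of field names (adjust textKeys to your ACF setup).
function extractText(node, textKeys, path = "", out = []) {
  if (node === null || typeof node !== "object") return out;
  for (const [key, value] of Object.entries(node)) {
    const childPath = path ? `${path}.${key}` : key;
    if (typeof value === "string" && textKeys.includes(key)) {
      out.push({ path: childPath, text: value });
    } else if (typeof value === "object") {
      extractText(value, textKeys, childPath, out);  // recurse into nested ACF groups
    }
  }
  return out;
}

const post = {
  title: { rendered: "Hello" },
  acf: { hero: { heading: "Welcome", cta_label: "Read more" } },
};
const strings = extractText(post, ["rendered", "heading", "cta_label"]);
// → [{ path: "title.rendered", text: "Hello" },
//    { path: "acf.hero.heading", text: "Welcome" },
//    { path: "acf.hero.cta_label", text: "Read more" }]
```

Recording the dot-separated path alongside each string is what lets the later step rebuild the original JSON structure with the DeepL output slotted back into the right fields.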
by WeblineIndia
Fill iOS localization gaps from .strings → Google Sheets and PR with placeholders (GitHub) This n8n workflow automatically identifies missing translations in .strings files across iOS localizations (e.g., Base.lproj vs fr.lproj) and generates a report in Google Sheets. Optionally, it creates a GitHub PR to insert placeholder strings ("TODO_TRANSLATE") so builds don't fail. Supports DRY_RUN mode. Who’s it for iOS teams who want fast feedback on missing translations. Localization managers who want a shared sheet to assign work to translators. How it works A GitHub Webhook triggers on push or pull request. The iOS repo is scanned for .strings files under Base.lproj or en.lproj and their target-language counterparts. It compares keys and identifies what’s missing. A new or existing Google Sheet tab (e.g., fr) is updated with missing entries. If enabled, it creates a GitHub PR with placeholder keys (e.g., "TODO_TRANSLATE"). How to set up Import the Workflow JSON into your n8n instance. Set Config Node values like: { "GITHUB_OWNER": "your-github-user-name", "GITHUB_REPO": "your-iOS-repo-name", "BASE_BRANCH": "develop", "SHEET_ID": "<YOUR_GOOGLE_SHEET_ID>", "ENABLE_PR": "true", "IOS_SOURCE_GLOB": "**/Base.lproj/*.strings,**/en.lproj/*.strings", "IOS_TARGET_GLOB": "**/*.lproj/*.strings", "PLACEHOLDER_VALUE": "TODO_TRANSLATE", "BRANCH_TEMPLATE": "chore/l10n-gap-{{YYYYMMDD}}" } Create GitHub Webhook URL: https://your-n8n-instance/webhook/l10n-gap-ios Content-Type: application/json Trigger on: Push, Pull Request Connect credentials GitHub token with repo scope Google Sheets API (Optional) Slack OAuth + SMTP Requirements | Tool | Needed For | Notes | | ---------------- | -------------------- | ---------------------------------------- | | GitHub Repo | Webhook, API for PRs | repo token or App | | Google Sheets | Sheet output | Needs valid SHEET_ID or create-per-run | | Slack (optional) | Notifications | chat:write scope | | SMTP (optional) | Email fallback | Standard SMTP creds | 
How to customize **Multiple Locales**: Add comma-separated values to TARGET_LANGS_CSV (e.g., fr,de,es). **Globs**: Adjust IOS_SOURCE_GLOB and IOS_TARGET_GLOB to scan only certain modules or file patterns. **Ignore Rules**: Add IGNORE_KEY_PREFIXES_CSV to skip certain internal/debug strings. **Placeholder Value**: Change PLACEHOLDER_VALUE to something meaningful like "@@@". **Slack/Email**: Set SLACK_CHANNEL and EMAIL_FALLBACK_TO_CSV appropriately. **DRY_RUN**: Set to true to skip GitHub PR creation but still update the sheet. Add‑ons **Android support:** Add a second path for strings.xml (values → values-<lang>), same diff → Sheets → placeholder PR. **Multiple languages at once:** Expand TARGET_LANGS_CSV and loop tabs + placeholder commits per locale. **.stringsdict handling:** Validate plural/format entries and open a precise PR. **Translator DMs:** Provide a LANG → Slack handle/email map to DM translators with their specific file/key counts. **GitLab/Bitbucket variants:** Replace GitHub API calls with GitLab/Bitbucket equivalents to open Merge Requests. Use Case Examples Before a test build, ensure fr has all keys present—placeholders keep the app compiling. Weekly run creates a single sheet for translators and a PR with placeholders, avoiding last‑minute breakages. A new screen adds 12 strings; the bot flags and pre‑fills them across locales. 
Common troubleshooting | Issue | Possible Cause | Solution | | ------------------------ | --------------------------------------------- | ------------------------------------------------------ | | No source files found | Glob doesn't match Base.lproj or en.lproj | Adjust IOS_SOURCE_GLOB | | Target file missing | fr.lproj doesn’t exist yet | Will be created in placeholder PR | | Parsing skips entries | Non-standard string format in file | Ensure proper .strings format "key" = "value"; | | Sheet not updating | SHEET_ID missing or insufficient permission | Add valid ID or allow write access | | PR not created | ENABLE_PR=false or no missing keys | Enable PR and ensure at least one key is missing | | Slack/Email not received | Missing credentials or config | Configure Slack/SMTP properly and set recipient fields | Need Help? Want to expand this for Android? Loop through 5+ locales at once? Or replace GitHub with GitLab? Contact our n8n Team at WeblineIndia with your repo & locale setup and we’ll help tailor it to your translation workflow!
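The key diff at the heart of the workflow can be sketched as follows. This handles only the standard "key" = "value"; form the troubleshooting table mentions, not .stringsdict plurals, and the parsing in the actual template may be more elaborate:

```javascript
// Parse a .strings file body into a key → value map.
// Supports escaped characters inside quoted strings ("He said \"hi\"").
function parseStrings(body) {
  const map = {};
  const re = /"((?:[^"\\]|\\.)*)"\s*=\s*"((?:[^"\\]|\\.)*)"\s*;/g;
  let m;
  while ((m = re.exec(body)) !== null) map[m[1]] = m[2];
  return map;
}

// Keys present in the base locale but missing from the target, returned
// as the placeholder lines a PR commit would append to the target file.
function missingKeys(baseBody, targetBody, placeholder = "TODO_TRANSLATE") {
  const base = parseStrings(baseBody);
  const target = parseStrings(targetBody);
  return Object.keys(base)
    .filter((k) => !(k in target))
    .map((k) => `"${k}" = "${placeholder}";`);
}

const baseFile = `"welcome_title" = "Welcome";\n"cta_buy" = "Buy now";`;
const frFile = `"welcome_title" = "Bienvenue";`;
// missingKeys(baseFile, frFile) → ['"cta_buy" = "TODO_TRANSLATE";']
```

Because the placeholder lines are syntactically valid .strings entries, committing them keeps the app compiling, which is exactly the "builds don't fail" guarantee described above.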
by Anirudh Aeran
This workflow provides a complete backend solution for building your own WhatsApp marketing dashboard. It enables you to send dynamic, personalized, and rich-media broadcast messages to an entire contact list stored in Google Sheets. The system is built on three core functions: automatically syncing your approved Meta templates, providing an API endpoint for your front-end to fetch those templates, and a powerful broadcast engine that merges your contact data with the selected template for mass delivery. Who’s it for? This template is for marketers, developers, and businesses who want to run sophisticated WhatsApp campaigns without being limited by off-the-shelf tools. It's perfect for anyone who needs to send personalized bulk messages with dynamic content (like unique images or links for each user) and wants to operate from a simple, custom-built web interface. How it works This workflow is composed of three independent, powerful parts: Automated Template Sync: A scheduled trigger runs periodically to fetch all of your approved message templates directly from your Meta Business Account. It then clears and updates an n8n Data Table, ensuring your list of available templates is always perfectly in sync with Meta. Front-end API Endpoint: A dedicated webhook acts as an API for your dashboard. When your front-end calls this endpoint, it returns a clean JSON list of all available templates from the n8n Data Table, which you can use to populate a dropdown menu for the user. Dynamic Broadcast Engine: The main webhook listens for a request from your front-end, which includes the name of the template to send. It then: Looks up the template's structure in the Data Table. Fetches all contacts from your Google Sheet. For each contact, a Code node dynamically constructs a personalized API request. It can merge the contact's name into the body, add a unique user ID to a button's URL, and even pull a specific image URL from your Google Sheet to use as a dynamic header. 
Sends the fully personalized message to the contact. How to set up Pre-requisite - Front-end: This workflow is a backend and is designed to be triggered by a front-end application. You will need a simple UI with a dropdown to select a template and a button to trigger the broadcast. Meta for Developers: You need a Meta App with the WhatsApp Business API configured. From your app, you will need your WhatsApp Business Account ID, a Phone Number ID, and a permanent System User Access Token. n8n Data Table: Create an n8n Data Table (e.g., named "WhatsApp Templates") with the following columns: template_name, language_code, components_structure, template_id, status, category. Google Sheet: Create a Google Sheet to store your contacts. It must have columns like Phone Number, Full Name, and for dynamic images, Marketing Image URL. Configure Credentials: -> Create an HTTP Header Auth credential in n8n for WhatsApp. Use Authorization as the Header Name and Bearer YOUR_PERMANENT_TOKEN as the value. -> Add your Google Sheets credentials. Configure Nodes: -> In both HTTP Request nodes, select your WhatsApp Header Auth credential. Update the URLs with your own Phone Number ID and WABA ID. -> In the Google Sheets node, select your credential and enter the Sheet ID. -> In all Data Table nodes, select the Data Table you created. First Run: Manually execute the "Sync Meta Templates" flow (starting with the Schedule Trigger) once to populate your Data Table with your templates. Activate: Activate all parts of the workflow. Requirements A Meta for Developers account with a configured WhatsApp Business App. A permanent System User Access Token for the WhatsApp Business API. A Google Sheets account. A front-end application/dashboard to trigger the workflow.
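The per-contact payload the Code node constructs might look like the sketch below. The field layout follows Meta's documented template-message format for the Cloud API, but the component order, parameter positions, and contact field names here are assumptions that depend on your approved template and sheet columns:

```javascript
// Build one personalized template-message payload for the Cloud API
// (POSTed to https://graph.facebook.com/<version>/<PHONE_NUMBER_ID>/messages).
function buildTemplateMessage(contact, template) {
  return {
    messaging_product: "whatsapp",
    to: contact.phone,
    type: "template",
    template: {
      name: template.name,
      language: { code: template.language },
      components: [
        { // dynamic header image pulled from the Google Sheet row
          type: "header",
          parameters: [{ type: "image", image: { link: contact.imageUrl } }],
        },
        { // {{1}} in the approved body text becomes the contact's name
          type: "body",
          parameters: [{ type: "text", text: contact.fullName }],
        },
        { // dynamic suffix appended to the first URL button
          type: "button",
          sub_type: "url",
          index: "0",
          parameters: [{ type: "text", text: contact.userId }],
        },
      ],
    },
  };
}

const msg = buildTemplateMessage(
  { phone: "15551234567", fullName: "Ada", imageUrl: "https://example.com/a.png", userId: "u42" },
  { name: "spring_sale", language: "en_US" }
);
```

In the workflow, the components array would be assembled conditionally from the components_structure column of the Data Table, so templates without a media header or URL button still produce valid requests.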