by Friedemann Schuetz
Welcome to my VEO3 Video Generator Workflow! This automated workflow transforms simple text descriptions into professional 8-second videos using Google's cutting-edge VEO3 AI model. Users submit video ideas through a web form, and the system automatically generates optimized prompts, creates high-quality videos with native audio, and delivers them via Google Drive - all powered by Claude 4 Sonnet for intelligent prompt optimization.

This workflow has the following sequence:

1. **VEO3 Generator Form** - Web form interface for users to input video content, format, and duration
2. **Video Prompt Generator** - AI agent powered by Claude 4 Sonnet that:
   - Analyzes user input for video content requirements
   - Creates factual, professional video titles
   - Generates detailed VEO3 prompts with subject, context, action, style, camera motion, composition, ambiance, and audio elements
   - Optimizes prompts for 16:9 landscape format and 8-second duration
3. **Create VEO3 Video** - Submits the optimized prompt to the fal.ai VEO3 API for video generation
4. **Wait 30 seconds** - Initial waiting period for video processing to begin
5. **Check VEO3 Status** - Monitors the video generation status via the fal.ai API
6. **Video completed?**
   - Decision node that checks whether video generation is finished
   - If not completed: returns to the wait cycle
   - If completed: proceeds to video retrieval
7. **Get VEO3 Video URL** - Retrieves the final video download URL from fal.ai
8. **Download VEO3 Video** - Downloads the generated MP4 video file
9. **Merge** - Combines video data with metadata for final processing
10. **Save Video to Google Drive** - Uploads the video to the specified Google Drive folder
11. **Video Output** - Displays a completion message with the Google Drive link to the user

The following accesses are required for the workflow:

- **Anthropic API** (Claude 4 Sonnet): Documentation
- **fal.ai API** (VEO3 model): Create an API key at https://fal.ai/dashboard/keys
- **Google Drive API**: Documentation

Workflow Features:

- **User-friendly web form**: Simple interface for video content input
- **AI-powered prompt optimization**: Claude 4 Sonnet creates professional VEO3 prompts
- **Automatic video generation**: Leverages Google's VEO3 model via fal.ai
- **Status monitoring**: Real-time tracking of video generation progress
- **Google Drive integration**: Automatic upload and sharing of generated videos
- **Structured output**: Consistent video titles and professional prompt formatting
- **Audio optimization**: VEO3's native audio generation with ambient sounds and music

Current Limitations:

- **Format**: Only 16:9 landscape videos are supported
- **Duration**: Only 8-second videos are supported
- **Processing time**: Videos typically take 60-120 seconds to generate

Use Cases:

- **Content creation**: Generate videos for social media, websites, and presentations
- **Marketing materials**: Create promotional videos and advertisements
- **Educational content**: Produce instructional and explanatory videos
- **Prototyping**: Rapid video concept development and testing
- **Creative projects**: Artistic and experimental video generation
- **Business presentations**: Professional video content for meetings and pitches

Feel free to contact me via LinkedIn if you have any questions!
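To make the wait-and-check cycle (steps 4-6 above) concrete, here is a minimal JavaScript sketch of the decision the "Video completed?" branch makes on each pass. The status values and response fields mirror fal.ai's queue-style API but are assumptions here - check them against the current fal.ai documentation:

```javascript
// Hypothetical sketch of the "Video completed?" decision.
// Status names and the video.url field are assumed, not verified.
function nextStep(statusResponse) {
  if (statusResponse.status === 'COMPLETED') {
    // Done: hand the download URL to the next node
    return { action: 'download', url: statusResponse.video && statusResponse.video.url };
  }
  if (statusResponse.status === 'FAILED') {
    return { action: 'abort', reason: statusResponse.error || 'generation failed' };
  }
  // IN_QUEUE / IN_PROGRESS: loop back to the Wait node
  return { action: 'wait', seconds: 30 };
}
```

In the workflow this decision is implemented by the IF node rather than code, but the branching logic is the same.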
by vinci-king-01
Lead Scoring Pipeline with Mattermost and Trello

This workflow automatically enriches incoming form-based leads, calculates a lead score from multiple data points, and then routes high-value prospects to a Mattermost alert channel while adding all leads to Trello for further handling. It centralizes lead intelligence and streamlines sales-team triage - no manual spreadsheet work required.

Pre-conditions/Requirements

Prerequisites:

- n8n instance (self-hosted or n8n cloud)
- ScrapeGraphAI community node installed
- Active Trello and Mattermost workspaces
- Lead-capture form or webhook that delivers JSON payloads

Required Credentials:

- **Trello API Key & Token** – Access to the board/list where cards will be created
- **Mattermost Access Token** – Permission to post messages in the target channel
- **(Optional) Clearbit / Apollo / 3rd-party enrichment keys** – If you replace the sample enrichment HTTP requests

Specific Setup Requirements:

| Variable | Purpose | Example Value |
|-------------------------|---------------------------------------------|--------------|
| MM_CHANNEL_ID | Mattermost channel to post high-score leads | leads-alerts |
| TRELLO_BOARD_ID | Board where new cards are added | 62f1d… |
| TRELLO_LIST_ID_HOT | Trello list for hot leads | Hot Deals |
| TRELLO_LIST_ID_BACKLOG | Trello list for all other leads | New Leads |
| LEAD_SCORE_THRESHOLD | Score above which a lead is considered hot | 70 |

How it works

This workflow grabs new leads at a defined interval, enriches each lead with external data, computes a custom score, and routes the lead: high scorers trigger a Mattermost alert and are placed in a "Hot Deals" list, while the rest are stored in a "Backlog" list on Trello. All actions are fully automated and run unattended once configured.

Key Steps:

- **Schedule Trigger**: Runs every 15 minutes to poll for new form submissions.
- **HTTP Request – Fetch Leads**: Retrieves the latest unprocessed leads from your form backend or CRM API.
- **Split In Batches**: Processes leads 20 at a time to respect API rate limits.
- **HTTP Request – Enrich Lead**: Calls an external enrichment service (e.g., Clearbit) to append company and person data.
- **Code – Calculate Score**: JavaScript that applies weightings to enriched attributes and outputs a numeric score.
- **IF – Score Threshold**: Branches the flow based on LEAD_SCORE_THRESHOLD.
- **Mattermost Node**: Sends a rich-text message with lead details for high-score prospects.
- **Trello Node (Hot List)**: Creates a Trello card in the "Hot Deals" list for high-value leads.
- **Trello Node (Backlog)**: Creates a Trello card in the "New Leads" list for everyone else.
- **Merge & Flag Processed**: Marks leads as processed to avoid re-processing in future runs.

Set up steps

Setup Time: 10-15 minutes

1. Import the Workflow: Download the JSON template and import it into n8n.
2. Create / Select Credentials:
   - Add your Trello API key & token under Trello API credentials.
   - Add your Mattermost personal access token under Mattermost API credentials.
3. Configure Environment Variables: Set MM_CHANNEL_ID, TRELLO_BOARD_ID, TRELLO_LIST_ID_HOT, TRELLO_LIST_ID_BACKLOG, and LEAD_SCORE_THRESHOLD in n8n → Settings → Environment.
4. Form Backend Endpoint: Update the first HTTP Request node with the correct URL and authentication for your form or CRM.
5. (Optional) Enrichment Provider: Replace the sample enrichment HTTP Request with your chosen provider's endpoint and credentials.
6. Test Run: Execute the workflow manually with a sample payload to ensure Trello cards and Mattermost messages are produced.
7. Activate: Enable the workflow; it will now run on the defined schedule.

Node Descriptions

Core Workflow Nodes:

- **Schedule Trigger** – Triggers the workflow every 15 minutes.
- **HTTP Request (Fetch Leads)** – Pulls unprocessed leads.
- **SplitInBatches** – Limits processing to 20 leads per batch.
- **HTTP Request (Enrich Lead)** – Adds firmographic & technographic data.
- **Code (Calculate Score)** – JavaScript scoring algorithm; outputs a score field.
- **IF (Score ≥ Threshold)** – Determines the routing path.
- **Mattermost** – Sends a formatted message with lead summary & score.
- **Trello (Create Card)** – Adds the lead as a card to the appropriate list.
- **Merge (Flag Processed)** – Updates the source system to mark the lead as processed.

Data Flow:

Schedule Trigger → HTTP Request (Fetch Leads) → SplitInBatches → HTTP Request (Enrich Lead) → Code (Calculate Score) → IF

- IF (Yes) → Mattermost → Trello (Hot List)
- IF (No) → Trello (Backlog)
- Both branches → Merge (Flag Processed)

Customization Examples

Adjust Scoring Weights:

```js
// Code node: adjust weights to change scoring logic
const weights = {
  industry: 15,
  companySize: 25,
  jobTitle: 20,
  intentSignals: 40
};
```

Dynamic Trello List Mapping:

```js
// Use a lookup table instead of an IF node
const mapping = {
  hot: 'TRELLO_LIST_ID_HOT',
  cold: 'TRELLO_LIST_ID_BACKLOG'
};
items[0].json.listId = mapping[items[0].json.segment];
return items;
```

Data Output Format

The workflow outputs structured JSON data:

```json
{
  "leadId": "12345",
  "email": "jane.doe@example.com",
  "score": 82,
  "priority": "hot",
  "trelloCardUrl": "https://trello.com/c/abc123",
  "mattermostPostId": "78yzk9n8ppgkkp"
}
```

Troubleshooting

Common Issues:

- Trello authentication fails – Ensure the token has write access and that the API key & token pair belong to the same Trello account.
- Mattermost message not sent – Confirm the token can post in the target channel and that MM_CHANNEL_ID is correct.

Performance Tips:

- Batch leads in groups of 20-50 to avoid enrichment API rate-limit errors.
- Cache enrichment responses for repeat domains to reduce API calls.

Pro Tips:

- Add a second IF node to send ultra-high (>90) scores directly to an account executive via email.
- Store raw enrichment responses in a database for future analytics.
- Use n8n's built-in Execution Data Save to debug edge cases without rerunning external API calls.
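Building on the weights snippet above, the "Code – Calculate Score" node might look roughly like the sketch below. The enriched field names (industry, employeeCount, jobTitle, intentSignals) and the scoring rules are illustrative assumptions, not the template's exact logic:

```javascript
// Sketch of a scoring function for the Code node.
// All field names and rules below are illustrative assumptions.
const weights = { industry: 15, companySize: 25, jobTitle: 20, intentSignals: 40 };

function calculateScore(lead) {
  let score = 0;
  // Target industries get full industry weight
  if (['saas', 'fintech'].includes((lead.industry || '').toLowerCase())) score += weights.industry;
  // Mid-market and larger companies get the size weight
  if ((lead.employeeCount || 0) >= 200) score += weights.companySize;
  // Decision-maker titles get the title weight
  if (/vp|director|head/i.test(lead.jobTitle || '')) score += weights.jobTitle;
  // Up to 4 intent signals, each worth a quarter of the intent weight
  score += Math.min(lead.intentSignals || 0, 4) * (weights.intentSignals / 4);
  return score;
}
```

In the actual node you would map `calculateScore` over `items` and write the result to `item.json.score` so the downstream IF node can compare it against LEAD_SCORE_THRESHOLD.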
by explorium
Research Agent - Automated Sales Meeting Intelligence

This n8n workflow automatically prepares comprehensive sales research briefs every morning for your upcoming meetings by analyzing both the companies you're meeting with and the individual attendees. The workflow connects to your calendar, identifies external meetings, enriches companies and contacts with deep intelligence from Explorium, and delivers personalized research reports - giving your sales team everything they need for informed, confident conversations.

DEMO Template Demo

Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

Google Calendar (or Outlook)
- **Type:** OAuth2
- **Used for:** Reading daily meeting schedules and identifying external attendees
- Alternative: Microsoft Outlook Calendar
- Get credentials at Google Cloud Console

Explorium API
- **Type:** Generic Header Auth
- **Header:** Authorization
- **Value:** Bearer YOUR_API_KEY
- **Used for:** Business/prospect matching, firmographic enrichment, professional profiles, LinkedIn posts, website changes, competitive intelligence
- Get your API key at Explorium Dashboard

Explorium MCP
- **Type:** HTTP Header Auth
- **Used for:** Real-time company intelligence and supplemental research for AI agents
- Connect to: https://mcp.explorium.ai/mcp

Anthropic API
- **Type:** API Key
- **Used for:** AI-powered company and attendee research analysis
- Get your API key at Anthropic Console

Slack (or preferred output)
- **Type:** OAuth2
- **Used for:** Delivering research briefs
- Alternative options: Google Docs, Email, Microsoft Teams, CRM updates

Go to Settings → Credentials, create these credentials, and assign them in the respective nodes before running the workflow.

Workflow Overview

Node 1: Schedule Trigger
Automatically runs the workflow on a recurring schedule.
- **Type:** Schedule Trigger
- **Default:** Every morning before business hours
- **Customizable:** Set to any interval (hourly, daily, weekly) or specific times

Alternative Trigger Options:
- **Manual Trigger:** On-demand execution
- **Webhook:** Triggered by calendar events or CRM updates

Node 2: Get many events
Retrieves meetings from your connected calendar.
- **Calendar Source:** Google Calendar (or Outlook)
- **Authentication:** OAuth2
- **Time Range:** Current day + 18 hours (configurable via timeMax)
- **Returns:** All calendar events with attendee information, meeting titles, times, and descriptions

Node 3: Filter for External Meetings
Identifies meetings with external participants and filters out internal-only meetings.

Filtering Logic:
- Extracts attendee email domains
- Excludes your company domain (e.g., 'explorium.ai')
- Excludes calendar system addresses (e.g., 'resource.calendar.google.com')
- Only passes events with at least one external attendee

Important Setup Note: Replace 'explorium.ai' in the code node with your company domain to properly filter internal meetings.

Output:
- Events with external participants only
- external_attendees: Array of external contact emails
- company_domains: Unique list of external company domains per meeting
- external_attendee_count: Number of external participants

Company Research Pipeline

Node 4: Loop Over Items
Iterates through each meeting with external attendees for company research.

Node 5: Extract External Company Domains
Creates a deduplicated list of all external company domains from the current meeting.

Node 6: Explorium API: Match Business
Matches company domains to Explorium's business entity database.
- **Method:** POST
- **Endpoint:** /v1/businesses/match
- **Authentication:** Header Auth (Bearer token)
- Returns:
  - business_id: Unique Explorium identifier
  - matched_businesses: Array of matches with confidence scores
  - Company name and basic info

Node 7: If
Validates that a business match was found before proceeding to enrichment.
- **Condition:** business_id is not empty
- **If True:** Proceed to parallel enrichment nodes
- **If False:** Skip to the next company in the loop

Nodes 8-9: Parallel Company Enrichment

Node 8: Explorium API: Business Enrich
- **Endpoints:** /v1/businesses/firmographics/enrich, /v1/businesses/technographics/enrich
- **Enrichment Types:** firmographics, technographics
- **Returns:** Company name, description, website, industry, employees, revenue, headquarters location, ticker symbol, LinkedIn profile, logo, full tech stack, nested tech stack by category, BI & analytics tools, sales tools, marketing tools

Node 9: Explorium API: Fetch Business Events
- **Endpoint:** /v1/businesses/events/fetch
- **Event Types:** New funding rounds, new investments, mergers & acquisitions, new products, new partnerships
- **Date Range:** September 1, 2025 - November 4, 2025
- **Returns:** Recent business milestones and financial events

Node 10: Merge
Combines enrichment responses and events data into a single data object.

Node 11: Cleans Merge Data Output
Transforms merged enrichment data into a structured format for AI analysis.

Node 12: Company Research Agent
AI agent (Claude Sonnet 4) that analyzes company data to generate actionable sales intelligence.

- **Input:** Structured company profile with all enrichment data
- Analysis Focus:
  - Company overview and business context
  - Recent website changes and strategic shifts
  - Tech stack and product focus areas
  - Potential pain points and challenges
  - How Explorium's capabilities align with their needs
  - Timely conversation starters based on recent activity
- Connected to Explorium MCP: Can pull additional real-time intelligence if needed to create more detailed analysis

Node 13: Create Company Research Output
Formats the AI analysis into a readable, shareable research brief.

Attendee Research Pipeline

Node 14: Create List of All External Attendees
Compiles all unique external attendee emails across all meetings.

Node 15: Loop Over Items2
Iterates through each external attendee for individual enrichment.

Node 16: Extract External Company Domains1
Extracts the company domain from each attendee's email.

Node 17: Explorium API: Match Business1
Matches the attendee's company domain to get a business_id for prospect matching.
- **Method:** POST
- **Endpoint:** /v1/businesses/match
- **Purpose:** Link the attendee to their company

Node 18: Explorium API: Match Prospect
Matches the attendee email to Explorium's professional profile database.
- **Method:** POST
- **Endpoint:** /v1/prospects/match
- **Authentication:** Header Auth (Bearer token)
- Returns: prospect_id: Unique professional profile identifier

Node 19: If1
Validates that a prospect match was found.
- **Condition:** prospect_id is not empty
- **If True:** Proceed to prospect enrichment
- **If False:** Skip to the next attendee

Node 20: Explorium API: Prospect Enrich
Enriches the matched prospect using multiple Explorium endpoints.
- **Enrichment Types:** contacts, profiles, linkedin_posts
- **Endpoints:** /v1/prospects/contacts/enrich, /v1/prospects/profiles/enrich, /v1/prospects/linkedin_posts/enrich
- Returns:
  - **Contacts:** Professional email, email status, all emails, mobile phone, all phone numbers
  - **Profiles:** Full professional history, current role, skills, education, company information, experience timeline, job titles and seniority
  - **LinkedIn Posts:** Recent LinkedIn activity, post content, engagement metrics, professional interests and thought leadership

Node 21: Cleans Enrichment Outputs
Structures prospect data for AI analysis.

Node 22: Attendee Research Agent
AI agent (Claude Sonnet 4) that analyzes prospect data to generate personalized conversation intelligence.
- **Input:** Structured professional profile with activity data
- Analysis Focus:
  - Career background and progression
  - Current role and responsibilities
  - Recent LinkedIn activity themes and interests
  - Potential pain points in their role
  - Relevant Explorium capabilities for their needs
  - Personal connection points (education, interests, previous companies)
  - Opening conversation starters
- Connected to Explorium MCP: Can gather additional company or market context if needed

Node 23: Create Attendee Research Output
Formats the attendee analysis into a readable brief with clear sections.

Node 24: Merge2
Combines the company research output with attendee information for final assembly.

Node 25: Loop Over Items1
Manages the final loop that combines company and attendee research for output.

Node 26: Send a message (Slack)
Delivers combined research briefs to a specified Slack channel or user.

Alternative Output Options:
- **Google Docs:** Create a formatted document per meeting
- **Email:** Send to the meeting organizer or sales rep
- **Microsoft Teams:** Post to channels or DMs
- **CRM:** Update opportunity/account records with research
- **PDF:** Generate downloadable research reports

Workflow Flow Summary

1. Schedule: Workflow runs automatically every morning
2. Fetch Calendar: Pull today's meetings from Google Calendar/Outlook
3. Filter: Identify meetings with external attendees only
4. Extract Companies: Get unique company domains from external attendees
5. Extract Attendees: Compile a list of all external contacts

Company Research Path:
1. Match Companies: Identify businesses in the Explorium database
2. Enrich (Parallel): Pull firmographics, website changes, competitive landscape, events, and challenges
3. Merge & Clean: Combine and structure company data
4. AI Analysis: Generate a company research brief with insights and talking points
5. Format: Create readable company research output

Attendee Research Path:
1. Match Prospects: Link attendees to professional profiles
2. Enrich (Parallel): Pull profiles, job changes, and LinkedIn activity
3. Merge & Clean: Combine and structure prospect data
4. AI Analysis: Generate attendee research with background and approach
5. Format: Create readable attendee research output

Delivery:
1. Combine: Merge company and attendee research for each meeting
2. Send: Deliver complete research briefs to Slack/preferred platform

This workflow eliminates manual pre-meeting research by automatically preparing comprehensive intelligence on both companies and individuals - giving sales teams the context and confidence they need for every conversation.

Customization Options

Calendar Integration - works with multiple calendar platforms:
- **Google Calendar:** Full OAuth2 integration
- **Microsoft Outlook:** Calendar API support
- **CalDAV:** Generic calendar protocol support

Trigger Flexibility - adjust when research runs:
- **Morning Routine:** Default daily at 7 AM
- **On-Demand:** Manual trigger for specific meetings
- **Continuous:** Hourly checks for new meetings

Enrichment Depth - add or remove enrichment endpoints:
- **Company:** Technographics, funding history, news mentions, hiring signals
- **Prospects:** Contact information, social profiles, company changes
- **Customizable:** Select only the data you need to optimize speed and costs

Research Scope - configure what gets researched:
- **All External Meetings:** Default behavior
- **Filtered by Keywords:** Only meetings with specific titles
- **By Attendee Count:** Only meetings with X+ external attendees
- **By Calendar:** Specific calendars only

Output Destinations - deliver research to your preferred platform:
- **Messaging:** Slack, Microsoft Teams, Discord
- **Documents:** Google Docs, Notion, Confluence
- **Email:** Gmail, Outlook, custom SMTP
- **CRM:** Salesforce, HubSpot (update account notes)
- **Project Management:** Asana, Monday.com, ClickUp

AI Model Options - swap AI providers based on needs:
- Default: Anthropic Claude (Sonnet 4)
- Alternatives: OpenAI GPT-4, Google Gemini

Setup Notes

- Domain Configuration: Replace 'explorium.ai' in the Filter for External Meetings code node with your company domain
- Calendar Connection: Ensure OAuth2 credentials have calendar read permissions
- Explorium Credentials: Both the API key and MCP credentials must be configured
- Output Timing: The schedule trigger should run with enough lead time before the first meetings
- Rate Limits: Adjust loop batch sizes if you hit API rate limits during enrichment
- Slack Configuration: Select the destination channel or user for research delivery
- Data Privacy: Research is based on publicly available professional information and company data

This workflow acts as your automated sales researcher, preparing detailed intelligence reports every morning so your team walks into every meeting informed, prepared, and ready to have meaningful conversations that drive business forward.
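As a concrete illustration, the "Filter for External Meetings" code node described above (Node 3) might look roughly like this sketch. The event shape follows Google Calendar's attendees array; the INTERNAL_DOMAINS list is where you substitute your own company domain, as the setup notes instruct:

```javascript
// Sketch of Node 3's filtering logic. Event/attendee field names
// mirror the Google Calendar API; adjust for Outlook payloads.
const INTERNAL_DOMAINS = ['explorium.ai', 'resource.calendar.google.com'];

function filterExternal(event) {
  // Keep only attendee emails whose domain is not internal
  const external = (event.attendees || [])
    .map(a => a.email)
    .filter(e => e && !INTERNAL_DOMAINS.includes(e.split('@')[1]));
  if (external.length === 0) return null; // internal-only meeting: drop it
  return {
    ...event,
    external_attendees: external,
    company_domains: [...new Set(external.map(e => e.split('@')[1]))],
    external_attendee_count: external.length,
  };
}
```

The returned `external_attendees`, `company_domains`, and `external_attendee_count` fields match the node's documented output and feed both the company and attendee research pipelines.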
by vinci-king-01
Product Price Monitor with Slack and Jira

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes multiple e-commerce sites, analyses weekly seasonal price trends, and notifies your team in Slack while opening Jira tickets for items that require price adjustments. It helps retailers plan inventory and pricing by surfacing actionable insights every week.

Pre-conditions/Requirements

Prerequisites:

- n8n instance (self-hosted or n8n cloud)
- ScrapeGraphAI community node installed
- Slack workspace & channel for notifications
- Jira Software project (cloud or server)
- Basic JavaScript knowledge for optional custom code edits

Required Credentials:

- **ScrapeGraphAI API Key** – Enables web scraping
- **Slack OAuth Access Token** – Required by the Slack node
- **Jira Credentials** – Email & API token (cloud) or username & password (server)
- (Optional) Proxy credentials – If target websites block direct scraping

Specific Setup Requirements:

| Resource | Purpose | Example |
|----------|---------|---------|
| Product URL list | Seed URLs to monitor | https://example.com/products-winter-sale |
| Slack Channel | Receives trend alerts | #pricing-alerts |
| Jira Project Key | Tickets are created here | ECOM |

How it works

On a weekly schedule (or manual trigger), the workflow scrapes each product page, compares the current price against its historical average, and routes any significant deviation to Slack and Jira.

Key Steps:

- **Webhook Trigger**: Kicks off the workflow via a weekly schedule or manual call.
- **Set Product URLs**: Prepares the list of product pages to analyse.
- **SplitInBatches**: Processes URLs in manageable batches to avoid rate limits.
- **ScrapeGraphAI**: Extracts current prices, stock, and seasonality hints from each URL.
- **Code (Trend Logic)**: Compares scraped prices against historical averages.
- **If (Threshold Check)**: Determines whether price deviations exceed ±10%.
- **Slack Node**: Sends a formatted message to the pricing channel for each deviation.
- **Jira Node**: Creates/updates a ticket linking to the product for further action.
- **Merge**: Collects all batch results for summary reporting.

Set up steps

Setup Time: 15-20 minutes

1. Install Community Nodes: In n8n, go to Settings → Community Nodes, search for "ScrapeGraphAI", and install.
2. Add Credentials:
   a. Slack → Credentials → New, paste your Bot/User OAuth token.
   b. Jira → Credentials → New, enter your domain, email/username, and API token/password.
   c. ScrapeGraphAI → Credentials → New, paste your API key.
3. Import Workflow: Upload or paste the JSON template into n8n.
4. Edit the "Set Product URLs" Node: Replace placeholder URLs with your real product pages.
5. Configure Schedule: Replace the Webhook Trigger with a Cron node (e.g., every Monday 09:00), or keep the webhook for manual runs.
6. Map Jira Fields: In the Jira node, ensure Project Key, Issue Type (e.g., Task), and Summary fields match your instance.
7. Test Run: Execute the workflow. Confirm the Slack message appears and a Jira issue is created.
8. Activate: Toggle the workflow to Active so it runs automatically.

Node Descriptions

Core Workflow Nodes:

- **Webhook** – Default trigger; can be swapped with Cron for weekly automation.
- **Set (Product URLs)** – Stores an array of product links for scraping.
- **SplitInBatches** – Limits each ScrapeGraphAI call to five URLs to reduce load.
- **ScrapeGraphAI** – Crawls and parses HTML, returning JSON with title, price, and availability.
- **Code (Trend Logic)** – Calculates percentage change vs. historical data (stored externally or hard-coded for the demo).
- **If (Threshold Check)** – Routes items above/below the set variance.
- **Slack** – Posts a rich-format message containing the product title, old vs. new price, and link.
- **Jira** – Creates or updates a ticket with priority set to *Medium* and assigns it to the Pricing team lead.
- **Merge** – Recombines batch streams for optional reporting or storage.

Data Flow:

Webhook → Set (Product URLs) → SplitInBatches → ScrapeGraphAI → Code (Trend Logic) → If → Slack / Jira → Merge

Customization Examples

Change Price Deviation Threshold:

```js
// Code (Trend Logic) node
const threshold = 0.05; // 5% instead of the default 10%
```

Alter Slack Message Template:

```js
{
  "text": `${item.name} price changed from $${item.old} to $${item.new} (${item.diff}%).`,
  "attachments": [
    {
      "title": "Product Link",
      "title_link": item.url,
      "color": "#4E79A7"
    }
  ]
}
```

Data Output Format

The workflow outputs structured JSON data:

```json
{
  "product": "Winter Jacket",
  "url": "https://example.com/winter-jacket",
  "oldPrice": 129.99,
  "newPrice": 99.99,
  "change": -23.08,
  "scrapedAt": "2023-11-04T09:00:00Z",
  "status": "Below Threshold",
  "slackMsgId": "A1B2C3",
  "jiraIssueKey": "ECOM-101"
}
```

Troubleshooting

Common Issues:

- ScrapeGraphAI returns empty data – Verify selectors; many sites use dynamic rendering and require a headless browser flag.
- Slack message not delivered – Check that the OAuth token scopes include chat:write; also confirm the channel ID.
- Jira ticket creation fails – Field mapping mismatch; ensure the Issue Type is valid and required custom fields are supplied.

Performance Tips:

- Batch fewer URLs (e.g., 3 instead of 5) to reduce timeout risk.
- Cache historical prices in an external DB (Postgres, Airtable) instead of reading large CSVs in the Code node.

Pro Tips:

- Rotate proxies/IPs within ScrapeGraphAI to bypass aggressive e-commerce anti-bot measures.
- Add a Notion or Sheets node after Merge for historical logging.
- Use the Error Trigger workflow in n8n to alert when ScrapeGraphAI fails more than X times per run.
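A minimal sketch of the "Code (Trend Logic)" node described above, assuming the historical average is supplied directly (hard-coded or looked up externally, as the node description notes). The output field names follow the sample JSON shown in the Data Output Format section:

```javascript
// Sketch of the trend logic: percent change vs. a historical average.
// The historicalAvg argument stands in for an external lookup.
const threshold = 0.10; // ±10% default, as in the If (Threshold Check) node

function evaluate(item, historicalAvg) {
  const change = (item.newPrice - historicalAvg) / historicalAvg;
  return {
    ...item,
    change: Math.round(change * 10000) / 100, // percent, two decimals
    exceedsThreshold: Math.abs(change) > threshold,
  };
}
```

The `exceedsThreshold` flag is what the If node would branch on to decide whether Slack and Jira are notified.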
by Surya Vardhan Yalavarthi
Submit a research topic through a form and receive a professionally styled executive report in your inbox - fully automated, with built-in scraping resilience.

The workflow searches Google via SerpApi, scrapes each result with Jina.ai (free, no key needed), and uses Claude to extract key findings. If a page is blocked by a CAPTCHA or login wall, it automatically retries with Firecrawl. Blocked sources are gracefully skipped after two attempts. Once all sources are processed, Claude synthesises a structured executive report and delivers it as a styled HTML email via Gmail.

How it works

1. A web form collects the research topic, number of sources (5-7), and recipient email
2. SerpApi searches Google and returns a buffer of results (2× requested + 3 to survive domain filtering)
3. Junk domains are filtered out automatically (Reddit, YouTube, Twitter, PDFs, etc.)
4. Each URL is processed one at a time in a serial loop:
   - Round 1 - Jina.ai: free Markdown scraper, no API key required
   - Claude checks the content - if it's a CAPTCHA or wall, it returns RETRY_NEEDED
   - Round 2 - Firecrawl: paid fallback scraper retries the blocked URL
   - If still blocked, the source is marked as unavailable and the loop continues
5. All extracted findings are aggregated and Claude writes a structured executive report (Executive Summary, Key Findings, Detailed Analysis, Data & Evidence, Conclusions, Sources)
6. The report is converted to styled HTML (with tables, headings, and lists) and emailed to the recipient

Setup steps

Required credentials:

| Service | Where to get it | Where to paste it |
|---|---|---|
| SerpApi | serpapi.com - free tier: 100 searches/month | SerpApi Search node → query param api_key |
| Firecrawl | firecrawl.dev - free tier: 500 pages/month | Firecrawl (Fallback) node → Authorization header |
| Anthropic | n8n credentials → Anthropic API | Connect to: Claude Extractor, Claude Re-Analyzer, Claude Synthesizer |
| Gmail | n8n credentials → Gmail OAuth2 | Connect to: Send Gmail |

Error handler (optional)

The workflow includes a built-in error handler that captures the failed node name, error message, and execution URL. To activate it:

1. Workflow Settings → Error Workflow → select this workflow.
2. Add a Slack or Gmail node after Format Error to receive failure alerts.

Nodes used

- **n8n Form Trigger** - collects topic, source count, and recipient email
- **HTTP Request** × 3 - SerpApi (Google Search), Jina.ai (primary scraper), Firecrawl (fallback scraper)
- **Code** × 6 - URL filtering, response normalisation, prompt assembly, HTML rendering
- **Split In Batches** - serial loop (one URL at a time, prevents rate-limit collisions)
- **IF** × 2 - CAPTCHA/block detection after each scrape attempt
- **Wait** - 3-second pause before the Firecrawl retry
- **Basic LLM Chain** × 3 - page analysis (×2) and report synthesis (×1), all powered by Claude
- **Aggregate** - collects all per-URL findings before synthesis
- **Gmail** - sends the final HTML report
- **Error Trigger + Set** - error handler sub-flow

Notes

- Jina.ai is free and works without an API key for most public pages
- Firecrawl is only called when Jina is blocked - most runs won't consume Firecrawl credits
- SerpApi fetches numSources × 2 + 3 results to ensure enough survive domain filtering
- The Claude model is set to claude-sonnet-4-5 - swap in any Anthropic model in the three Claude nodes
- The HTML email renders markdown tables, headings, lists, and bold correctly in Gmail
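The over-fetch buffer and junk-domain filtering described above can be sketched as follows. The numSources × 2 + 3 formula comes straight from the notes; the blocklist contents are illustrative (the template also excludes YouTube, Twitter, etc.):

```javascript
// Sketch of the URL-filtering Code node. Blocklist entries are
// illustrative; extend to match the template's full junk-domain list.
const BLOCKED = ['reddit.com', 'youtube.com', 'twitter.com', 'x.com'];

// Over-fetch so enough results survive domain filtering
function bufferSize(numSources) {
  return numSources * 2 + 3;
}

function filterResults(results, numSources) {
  return results
    .filter(r => !r.url.endsWith('.pdf'))                                  // drop PDFs
    .filter(r => !BLOCKED.some(d => new URL(r.url).hostname.endsWith(d)))  // drop junk domains
    .slice(0, numSources);                                                 // keep only what was requested
}
```

So for a 5-source request, SerpApi is asked for 13 results, and the first 5 survivors of the filter move on to the scraping loop.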
by DataMinex
Transform property searches into personalized experiences! This powerful automation delivers dream-home matches straight to clients' inboxes with professional CSV reports - all from a simple web form.

🚀 What this workflow does

Create a complete real estate search experience that works 24/7:

- ✨ **Smart Web Form** - Beautiful property search form captures client preferences
- 🧠 **Dynamic SQL Builder** - Intelligently creates optimized queries from user input
- ⚡ **Lightning Database Search** - Scans 1000+ properties in milliseconds
- 📊 **Professional CSV Export** - Excel-ready reports with complete property details
- 📧 **Automated Email Delivery** - Personalized emails with property previews and attachments

🎯 Perfect for:

- **Real Estate Agents** - Generate leads and impress clients with instant service
- **Property Managers** - Automate tenant matching and recommendations
- **Brokerages** - Provide 24/7 self-service property discovery
- **Developers** - Showcase available properties with professional automation

💡 Why this workflow is a game-changer

> "From property search to professional report delivery in under 30 seconds!"
- ⚡ Instant Results: Zero wait time for property matches
- 🎨 Professional Output: Beautiful emails that showcase your expertise
- 📱 Mobile Optimized: Works flawlessly on all devices
- 🧠 Smart Filtering: Only searches criteria clients actually specify
- 📈 Infinitely Scalable: Handles unlimited searches simultaneously

📊 Real Estate Data Source

Built on authentic US market data from GitHub:

- 🏘️ 1000+ real properties across all US states
- 💰 Actual market prices from legitimate listings
- 🏠 Complete property details (bedrooms, bathrooms, square footage, lot size)
- 📍 Verified locations with accurate cities, states, and ZIP codes
- 🏢 Broker information for authentic real estate context

🛠️ Quick Setup Guide

Prerequisites Checklist ✅

- [ ] SQL Server database (MySQL/PostgreSQL also supported)
- [ ] Gmail account for automated emails
- [ ] n8n instance (cloud or self-hosted)
- [ ] 20 minutes setup time

Step 1: Import Real Estate Data 📥

1. 🌟 Download the data
2. 💾 Download the CSV file (1000+ properties included)
3. 🗄️ Create the SQL Server table with this exact schema:

```sql
CREATE TABLE [REALTOR].[dbo].[realtor_usa_price] (
    brokered_by BIGINT,
    status NVARCHAR(50),
    price DECIMAL(12,2),
    bed INT,
    bath DECIMAL(3,1),
    acre_lot DECIMAL(10,8),
    street BIGINT,
    city NVARCHAR(100),
    state NVARCHAR(50),
    zip_code INT,
    house_size INT,
    prev_sold_date NVARCHAR(50)
);
```

4. 📊 Import your CSV data into this table

Step 2: Configure Database Connection 🔗

1. 🔐 Set up Microsoft SQL Server credentials in n8n
2. ✅ Test the connection to ensure everything works
3. 🎯 The workflow is pre-configured for the table structure above

Step 3: Gmail Setup (The Magic Touch) 📧

1. 🌐 Visit Google Cloud Console
2. 🆕 Create a new project (or use an existing one)
3. 🔓 Enable the Gmail API in the API Library
4. 🔑 Create OAuth2 credentials (Web Application)
5. ⚙️ Add your n8n callback URL to the authorized redirects
6. 🔗 Configure Gmail OAuth2 credentials in n8n
7. ✨ Authorize your Google account

Step 4: Launch Your Property Search Portal 🚀

1. 📋 Import this workflow template (the form is pre-configured)
2. 🌍 Copy your webhook URL from the Property Search Form node
3. 🔍 Test with a sample property search
4. 📨 Check email delivery with the CSV attachment
5. 🎉 Go live and start impressing clients!

🎨 Customization Playground

🏷️ Personalize Your Brand:

```js
// Customize email subjects in the Gmail node
`🏠 Exclusive Properties Curated Just for You - ${results.length} Perfect Matches!`
`✨ Your Dream Home Portfolio - Handpicked by Our Experts`
`🎯 Hot Market Alert - ${results.length} Premium Properties Inside!`
```

🔧 Advanced Enhancements:

- 🎨 **HTML Email Templates**: Create stunning visual emails with property images
- 📊 **Analytics Dashboard**: Track popular searches and user engagement
- 🔔 **Smart Alerts**: Set up automated price-drop notifications
- 📱 **Mobile Integration**: Connect to React Native or Flutter apps
- 🤖 **AI Descriptions**: Add ChatGPT for compelling property descriptions

🌍 Multi-Database Flexibility:

```js
// Easy database switching
// MySQL: Replace the Microsoft SQL node → MySQL node
// PostgreSQL: Swap in a PostgreSQL node
// MongoDB: Use the MongoDB node with JSON queries
// Even CSV files: Use CSV reading nodes for smaller datasets
```

🚀 Advanced Features & Extensions

🔥 Pro Tips for Power Users:

- 🔄 **Bulk Processing**: Handle multiple searches simultaneously
- 💾 **Smart Caching**: Store popular searches for lightning-fast results
- 📈 **Lead Scoring**: Track which properties generate the most interest
- 📅 **Follow-up Automation**: Schedule nurturing email sequences

🎯 Integration Possibilities:

- 🏢 **CRM Connection**: Auto-add qualified leads to your CRM
- 📅 **Calendar Integration**: Add property-viewing scheduling
- 📊 **Price Monitoring**: Track market trends and price changes
- 📱 **Social Media**: Auto-share featured properties to social platforms
- 💬 **Chat Integration**: Connect to WhatsApp or SMS for instant alerts

🔗 Expand Your Real Estate Automation

🌟 Related Workflow Ideas:

- 🤖 AI Property Valuation - Add machine learning for price predictions
- 📊 Market Analysis Reports - Generate comprehensive market insights
- 📱 SMS Property Alerts - Instant text notifications for hot properties
🏢 Commercial Property Search - Adapt for office and retail spaces 💹 Investment ROI Calculator - Add financial analysis for investors 🏘️ Neighborhood Analytics - Include school ratings and demographics 🛠️ Technical Extensions 📷 Image Processing: Auto-resize and optimize property photos 🗺️ Map Integration: Add interactive property location maps 📱 Progressive Web App: Create mobile app experience 🔔 Push Notifications: Real-time alerts for saved searches 🚀 Get Started Now Import this workflow template Configure your database and Gmail Customize branding and messaging Launch your professional property search portal Watch client satisfaction soar!
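The "smart filtering" promise above (querying only the criteria a client actually specified) can be sketched as a small query builder, the kind of thing a Code node might run before the SQL node. The function name and criteria fields are illustrative, not the exact code inside the workflow; the column names match the schema above.

```javascript
// Build a parameterized query from only the criteria the client filled in.
// Illustrative sketch: buildPropertyQuery and the criteria shape are assumptions,
// but the table and column names mirror the realtor_usa_price schema above.
function buildPropertyQuery(criteria) {
  const filters = [];
  const params = {};
  if (criteria.city)     { filters.push('city = @city');       params.city = criteria.city; }
  if (criteria.state)    { filters.push('state = @state');     params.state = criteria.state; }
  if (criteria.minBed)   { filters.push('bed >= @minBed');     params.minBed = criteria.minBed; }
  if (criteria.maxPrice) { filters.push('price <= @maxPrice'); params.maxPrice = criteria.maxPrice; }
  // No filters at all means no WHERE clause: return everything (capped at 50 rows).
  const where = filters.length ? ' WHERE ' + filters.join(' AND ') : '';
  return { sql: 'SELECT TOP 50 * FROM [REALTOR].[dbo].[realtor_usa_price]' + where, params };
}
```

Unspecified criteria simply never appear in the WHERE clause, so a client who only gives a city and a budget is not accidentally filtered on bedrooms.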
by Kevin Meneses
What this workflow does This workflow automatically audits web pages for SEO issues and generates an executive-friendly SEO report using AI. It is designed for marketers, founders, and SEO teams who want fast, actionable insights without manually reviewing HTML, meta tags, or SERP data. The workflow: Reads URLs from Google Sheets Scrapes page content using Decodo (reliable scraping, even on protected sites) Extracts key SEO elements (title, meta description, canonical, H1/H2, visible text) Uses an AI Agent to analyze the page and generate: Overall SEO status Top issues Quick wins Title & meta description recommendations Saves results to Google Sheets Sends a formatted HTML executive report by email (Gmail) Who this workflow is for This template is ideal for: SEO consultants and agencies SaaS marketing teams Founders monitoring their landing pages Content teams doing SEO quality control It focuses on on-page SEO fundamentals, not backlink analysis or technical crawling. Setup (step by step) 1. Google Sheets Create an input sheet with one URL per row Create an output sheet to store SEO results 2. Decodo Add your Decodo API credentials The URL is automatically taken from the input sheet 👉 Decodo – Web Scraper for n8n 3. AI Agent Connect your LLM credentials (OpenAI, Gemini, etc.) The prompt is already optimized for non-technical SEO summaries 4. Gmail Connect your Gmail account Set the recipient email address Emails are sent in clean HTML format Notes & disclaimer This is a community template Results depend on page accessibility and content structure It focuses on on-page SEO, not backlinks or rankings
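The extraction step described above (title, meta description, canonical, H1) can be sketched with a few regexes, roughly what a Code node could do on the scraped HTML. This is a minimal sketch, not the workflow's actual node code; real-world pages with unusual markup may need a proper HTML parser.

```javascript
// Minimal sketch of on-page SEO element extraction from raw HTML.
// Regex-based for brevity; assumes reasonably well-formed markup.
function extractSeoElements(html) {
  const pick = (re) => { const m = html.match(re); return m ? m[1].trim() : null; };
  return {
    title: pick(/<title[^>]*>([\s\S]*?)<\/title>/i),
    metaDescription: pick(/<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i),
    canonical: pick(/<link\s+rel=["']canonical["']\s+href=["']([^"']*)["']/i),
    h1: pick(/<h1[^>]*>([\s\S]*?)<\/h1>/i),
  };
}
```

The resulting object is what gets handed to the AI Agent, which only has to reason about a handful of labeled fields rather than the full page source.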
by vinci-king-01
Property Listing Aggregator with Mailchimp and Notion ⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template. This workflow scrapes multiple commercial-real-estate websites, consolidates new property listings into Notion, and emails weekly availability updates or immediate space alerts to a Mailchimp audience. It automates the end-to-end process so business owners can stay on top of the latest spaces without manual searching. Pre-conditions/Requirements Prerequisites n8n instance (self-hosted or n8n.cloud) ScrapeGraphAI community node installed Active Notion workspace with permission to create/read databases Mailchimp account with at least one Audience list Basic understanding of JSON; ability to add API credentials in n8n Required Credentials ScrapeGraphAI API Key** – Enables web scraping functionality Notion OAuth2 / Integration Token** – Writes data into Notion database Mailchimp API Key** – Sends campaigns and individual emails (Optional) Proxy credentials – If target real-estate sites block your IP Specific Setup Requirements | Resource | Requirement | Example | |----------|-------------|---------| | Notion | Database with property fields (Address, Price, SqFt, URL, Availability) | Database ID: abcd1234efgh | | Mailchimp | Audience list where alerts are sent | Audience ID: f3a2b6c7d8 | | ScrapeGraphAI | YAML/JSON config per site | Stored inside the ScrapeGraphAI node | How it works Key Steps: Manual Trigger / CRON**: Starts the workflow weekly or on-demand.
Code (Site List Builder)**: Generates an array of target URLs for ScrapeGraphAI. Split In Batches**: Processes URLs in manageable groups to avoid rate limits. ScrapeGraphAI**: Extracts property details from each site. IF (New vs Existing)**: Checks whether the listing already exists in Notion. Notion**: Inserts new listings or updates existing records. Set**: Formats email content (HTML & plaintext). Mailchimp**: Sends a campaign or automated alert to subscribers. Sticky Notes**: Provide documentation and future-enhancement pointers. Set up steps Setup Time: 15-25 minutes Install Community Node Navigate to Settings → Community Nodes and install “ScrapeGraphAI”. Create Notion Integration Go to Notion Settings → Integrations → Develop your own integration. Copy the integration token and share your target database with the integration. Add Mailchimp API Key In Mailchimp: Account → Extras → API keys. Copy an existing key or create a new one, then add it to n8n credentials. Build Scrape Config In the ScrapeGraphAI node, paste a YAML/JSON selector config for each website (address, price, sqft, url, availability). Configure the URL List Open the first Code node. Replace the placeholder array with your target listing URLs. Map Notion Fields Open the Notion node and map scraped fields to your database properties. Save. Design Email Template In the Set node, tweak the HTML and plaintext blocks to match your brand. Test the Workflow Trigger manually, check that Notion rows are created and Mailchimp sends the message. Schedule Add a CRON node (weekly) or leave the Manual Trigger for ad-hoc runs. Node Descriptions Core Workflow Nodes: Manual Trigger / CRON** – Kicks off the workflow either on demand or on a schedule. Code (Site List Builder)** – Holds an array of commercial real-estate URLs and outputs one item per URL. Split In Batches** – Prevents hitting anti-bot limits by processing URLs in groups (default: 5). 
ScrapeGraphAI** – Crawls each URL, parses DOM with CSS/XPath selectors, returns structured JSON. IF (New Listing?)** – Compares scraped listing IDs against existing Notion database rows. Notion** – Creates or updates pages representing property listings. Set (Email Composer)** – Builds dynamic email subject, body, and merge tags for Mailchimp. Mailchimp** – Uses the Send Campaign endpoint to email your audience. Sticky Note** – Contains inline documentation and customization reminders. Data Flow: Manual Trigger/CRON → Code (URLs) → Split In Batches → ScrapeGraphAI → IF (New?) True path → Notion (Create) → Set (Email) → Mailchimp False path → (skip) Customization Examples Filter Listings by Maximum Budget // Inside the IF node (custom expression) {{$json["price"] <= 3500}} Change Email Frequency to Daily Digests { "nodes": [ { "name": "Daily CRON", "type": "n8n-nodes-base.cron", "parameters": { "triggerTimes": [ { "hour": 8, "minute": 0 } ] } } ] } Data Output Format The workflow outputs structured JSON data: { "address": "123 Market St, Suite 400", "price": 3200, "sqft": 950, "url": "https://examplebroker.com/listing/123", "availability": "Immediate", "new": true } Troubleshooting Common Issues Scraper returns empty objects – Verify selectors in ScrapeGraphAI config; inspect the site’s HTML for changes. Duplicate entries in Notion – Ensure the “IF” node checks a unique ID (e.g., listing URL) before creating a page. Performance Tips Reduce batch size or add delays in ScrapeGraphAI to avoid site blocking. Cache previously scraped URLs in an external file or database for faster runs. Pro Tips: Rotate proxies in ScrapeGraphAI for heavily protected sites. Use Notion rollups to calculate total available square footage automatically. Leverage Mailchimp merge tags (*|FNAME|*) in the Set node for personalized alerts.
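The "IF (New Listing?)" check and the duplicate-prevention tip above both come down to comparing scraped listings against a unique key already stored in Notion. A minimal sketch, assuming the listing URL is that key (the function and field names are illustrative, not the actual node code):

```javascript
// Split scraped listings into new vs. already-known, keyed by listing URL.
// Illustrative sketch of the IF (New Listing?) logic; in the workflow the
// existing URLs would come from a Notion database query upstream.
function splitNewListings(scraped, existingUrls) {
  const known = new Set(existingUrls);
  return {
    newListings: scraped.filter((l) => !known.has(l.url)),
    existing: scraped.filter((l) => known.has(l.url)),
  };
}
```

Only `newListings` should flow down the True path to the Notion create and Mailchimp nodes, which is what keeps duplicate pages and repeat alerts out.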
by Omer Fayyaz
Automatically discover and extract article URLs from any website using AI to identify valid content links while filtering out navigation, category pages, and irrelevant content—perfect for building content pipelines, news aggregators, and research databases. What Makes This Different: AI-Powered Intelligence** - Uses GPT-5-mini to understand webpage context and identify actual articles vs navigation pages, eliminating false positives Browser Spoofing** - Includes realistic User-Agent headers and request patterns to avoid bot detection on publisher sites Smart URL Normalization** - Automatically strips tracking parameters (utm_, fbclid, etc.), removes duplicates, and standardizes URLs Source Categorization** - AI assigns logical source names based on domain and content type for easy filtering Rate Limiting Built-In** - Configurable delays between requests prevent IP blocking and respect website resources Deduplication on Save** - Google Sheets append-or-update pattern ensures no duplicate URLs in your database Key Benefits of AI-Powered Content Discovery: Save 10+ Hours Weekly** - Automate manual link hunting across dozens of publisher sites Higher Quality Results** - AI filters out 95%+ of junk links (nav pages, categories, footers) that rule-based scrapers miss Scale Effortlessly** - Add new seed URLs to your sheet and the same workflow handles any website structure Industry Agnostic** - Works for news, blogs, research papers, product pages—any content type Always Up-to-Date** - Schedule daily runs to catch new content as it's published Full Audit Trail** - Track discovered URLs with timestamps and sources in Google Sheets Who's it for This template is designed for content marketers, SEO professionals, researchers, media monitors, and anyone who needs to aggregate content from multiple sources.
It's perfect for organizations that need to track competitor blogs, curate industry news, build research databases, monitor brand mentions, or aggregate content for newsletters without manually checking dozens of websites daily or writing complex scraping rules for each source. How it works / What it does This workflow creates an intelligent content discovery pipeline that automatically finds and extracts article URLs from any webpage. The system: Reads Seed URLs - Pulls a list of webpages to crawl from your Google Sheets (blog indexes, news feeds, publication homepages) Fetches with Stealth - Downloads each webpage's HTML using browser-like headers to avoid bot detection Converts for AI - Transforms messy HTML into clean Markdown that the AI can easily process AI Extraction - GPT-5-mini analyzes the content and identifies valid article URLs while filtering out navigation, categories, and junk links Normalizes & Saves - Cleans URLs (removes tracking params), deduplicates, and saves to Google Sheets with source tracking Key Innovation: Context-Aware Link Filtering - Unlike traditional scrapers that rely on CSS selectors or URL patterns (which break when sites update), the AI understands the semantic difference between an article link and a navigation link. It reads the page like a human would, identifying content worth following regardless of the website's structure. How to set up 1. Create Your Google Sheets Database Create a new Google Spreadsheet with two sheets: "Seed URLs" - Add column URL with webpages to crawl (blog homepages, news feeds, etc.) "Discovered URLs" - Add columns: URL, Source, Status, Discovered At Add 3-5 seed URLs to start (e.g., https://abc.com/, https://news.xyz.com/) 2. 
Connect Your Credentials Google Sheets**: Click the "Read Seed URLs" and "Save Discovered URLs" nodes → Select your Google Sheets account OpenAI**: Click the "OpenAI GPT-5-mini" node → Add your OpenAI API key Select your spreadsheet and sheet names in both Google Sheets nodes 3. Customize the AI Prompt (Optional) Open the "AI URL Extractor" node Modify the system message to add industry-specific rules: // Example: Add to system message for tech blogs For tech sites, also extract: Tutorial and guide URLs Product announcement pages Changelog and release notes Adjust source naming conventions to match your taxonomy 4. Test Your Configuration Click "Test Workflow" or use the Manual Trigger Check the execution to verify: Seed URLs are being read correctly HTML is fetched successfully (check for 200 status) AI returns valid JSON array of URLs URLs are saved to your output sheet Review the "Discovered URLs" sheet for results 5. Schedule and Monitor Adjust the Schedule Trigger (default: daily at 6 AM) Enable the workflow to run automatically Monitor execution logs for errors: Rate limiting: Increase wait time if sites block you Empty results: Check if seed URLs have changed structure AI errors: Review AI output in execution data Set up error notifications via email or Slack (add nodes after Completion Summary) Requirements Google Sheets Account** - OAuth2 connection for reading seed URLs and saving results OpenAI API Key** - For GPT-5-mini (or swap for any LangChain-compatible LLM)
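The "Normalizes & Saves" step described above (strip tracking parameters, deduplicate, standardize) can be sketched with the standard WHATWG URL API. This is a hedged sketch: the exact parameter list and function name in the workflow may differ.

```javascript
// Normalize discovered URLs: strip common tracking params, drop fragments,
// skip malformed entries, and deduplicate. The tracking-parameter list here
// is an assumption based on the description above (utm_*, fbclid, etc.).
function normalizeUrls(urls) {
  const tracking = /^(utm_|fbclid|gclid|mc_cid|mc_eid)/;
  const seen = new Set();
  const out = [];
  for (const raw of urls) {
    let u;
    try { u = new URL(raw); } catch { continue; } // skip anything that isn't a valid URL
    for (const key of [...u.searchParams.keys()]) {
      if (tracking.test(key)) u.searchParams.delete(key);
    }
    u.hash = '';
    const clean = u.toString();
    if (!seen.has(clean)) { seen.add(clean); out.push(clean); }
  }
  return out;
}
```

Because normalization runs before the Google Sheets save, the same article reached via two differently-tagged links collapses to a single row.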
by WeblineIndia
Automated Failed Login Detection with Jira Security Tasks, Slack Notifications Webhook: Failed Login Attempts → Jira Security Case → Slack Warnings This n8n workflow monitors failed login attempts from any application, normalizes incoming data, detects repeated attempts within a configurable time window and automatically: Sends detailed alerts to Slack, Creates Jira security tasks (single or grouped based on repetition), Logs all failed login attempts into a Notion database. It ensures fast, structured and automated responses to potential account compromise or brute-force attempts while maintaining persistent records. Quick Implementation Steps Import this JSON workflow into n8n. Connect your application to the failed-login webhook endpoint. Add Jira Cloud API credentials. Add Slack API credentials. Add Notion API credentials and configure the database for storing login attempts. Enable the workflow — done! What It Does Receives Failed Login Data Accepts POST requests containing failed login information. Normalizes the data, ensuring consistent fields: username, ip, timestamp and error. Validates Input Checks for missing username or IP. Sends a Slack alert if any required field is missing. Detects Multiple Attempts Uses a sliding time window (default: 5 minutes) to detect multiple failed login attempts from the same username + IP. Single attempts → standard Jira task + Slack notification. Multiple attempts → grouped Jira task + detailed Slack notification. Logs Attempts in Notion Records all failed login events into a Notion database with fields: Username, IP, Total Attempts, Attempt List, Attempt Type. Formats Slack Alerts Single attempt → lightweight notification. Multiple attempts → summary including timestamps, errors, total attempts, and Jira ticket link. Who’s It For This workflow is ideal for: Security teams monitoring authentication logs. DevOps/SRE teams maintaining infrastructure access logs. SaaS platform teams with high login traffic. 
Organizations aiming to automate breach detection. Teams using Jira + Slack + Notion + n8n for incident workflows. Requirements n8n (Self-Hosted or Cloud). Your application must POST failed login data to the webhook. Jira Software Cloud credentials (Email, API Token, Domain). Slack Bot Token with message-posting permissions. Notion API credentials with access to a database. Basic understanding of your login event sources. How It Works Webhook Trigger: Workflow starts when a failed-login event is sent to the failed-login webhook. Normalization: Converts single objects or arrays into a uniform format. Ensures username, IP, timestamp and error are present. Prepares a logMessage for Slack and Jira nodes. Validation: IF node checks whether username and IP exist. If missing → Slack alert for missing information. Multiple Attempt Detection: Function node detects repeated login attempts within a 5-minute sliding window. Flags attempts as multiple: true or false. Branching: Multiple attempts → build summary, create Jira ticket, format Slack message, store in Notion. Single attempts → create Jira ticket, format Slack message, store in Notion. Slack Alerts: Single attempt → concise message. Multiple attempts → detailed summary with timestamps and Jira ticket link. Notion Logging: Stores username, IP, total attempts, attempt list, attempt type in a dedicated database for recordkeeping. How To Set Up Import Workflow → Workflows → Import from File in n8n. Webhook Setup → copy the URL from the Failed Login Trigger node and integrate with your application. Jira Credentials → connect your Jira account to both Jira nodes and configure project/issue type. Slack Credentials → connect your Slack Bot and select the alert channel. Notion Credentials → connect your Notion account and select the database for storing login attempts. Test the Workflow → send sample events: missing fields, single attempts, multiple attempts. Enable Workflow → turn on workflow once testing passes.
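The sliding-window detection described above can be sketched as follows. This is an illustrative version of what the "Detect Multiple Attempts" Function node does, not its exact code: group events by username + IP, then flag an event as `multiple` when other failures from the same pair fall within the window (default 5 minutes, per the description).

```javascript
// Flag failed-login events as single vs. multiple using a sliding time window.
// Events are grouped by username|ip; an event is "multiple" when more than one
// failure from the same pair lands within windowMs of it.
function detectMultipleAttempts(events, windowMs = 5 * 60 * 1000) {
  const byKey = {};
  for (const e of events) {
    const key = `${e.username}|${e.ip}`;
    (byKey[key] = byKey[key] || []).push(new Date(e.timestamp).getTime());
  }
  return events.map((e) => {
    const t = new Date(e.timestamp).getTime();
    const times = byKey[`${e.username}|${e.ip}`];
    const inWindow = times.filter((x) => Math.abs(x - t) <= windowMs);
    return { ...e, multiple: inWindow.length > 1, attempts: inWindow.length };
  });
}
```

Downstream, the `multiple` flag is what drives the branching: grouped Jira task and detailed Slack summary when true, single-attempt handling when false.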
Logic Overview | Step Node | Description | |---------------------------------|-----------------------------------------------| | Normalize input | Normalize Login Event — Ensures each event has required fields and prepares a logMessage. | | Validate fields | Check Username & IP present — IF node → alerts Slack if data is incomplete. | | Detect repeats | Detect Multiple Attempts — Finds multiple attempts within a 5-minute window; sets multiple flag. | | Multiple attempts | IF - Multiple Attempts + Build Multi-Attempt Summary — Prepares grouped summary for Slack & Jira. | | Single attempt | Create Ticket - Single Attempt — Creates Jira task & Slack alert for one-off events. | | Multiple attempt ticket | Create Ticket - Multiple Attempts — Creates detailed Jira task. | | Slack alert formatting | Format Fields For Single/Multiple Attempt — Prepares structured message for Slack. | | Slack alert delivery | Slack Alert - Single/Multiple Attempts — Posts alert in selected Slack channel. | | Notion logging | Login Attempts Data Store in DB — Stores structured attempt data in Notion database. | Customization Options Webhook Node** → adjust endpoint path for your application. Normalization Function** → add fields such as device, OS, location or user-agent. Multiple Attempt Logic** → change the sliding window duration or repetition threshold. Jira Nodes** → modify issue type, labels or project. Slack Nodes** → adjust markdown formatting, channel routing or severity-based channels. Notion Node** → add or modify database fields to store additional context. Optional Enhancements: Geo-IP lookup for country/city info. Automatic IP blocking via firewall or WAF. User notification for suspicious login attempts. Database logging in MySQL/Postgres/MongoDB. Threat intelligence enrichment (e.g., AbuseIPDB). Use Case Examples Detect brute-force attacks targeting user accounts. Identify credential stuffing across multiple users. Monitor admin portal access failures with Jira task creation. 
Alert security teams instantly when login attempts originate from unusual locations. Centralize failed login monitoring across multiple applications with Notion logging. Troubleshooting Guide | Issue | Possible Cause | Solution | |-------------------------------|---------------------------------------------------|-------------------------------------------------------------| | Workflow not receiving data | Webhook misconfigured | Verify webhook URL & POST payload format | | Jira ticket creation fails | Invalid credentials or insufficient permissions | Update Jira API token and project access | | Slack alert not sent | Incorrect channel ID or missing bot scopes | Fix Slack credentials and permissions | | Multiple attempts not detected| Sliding window logic misaligned | Adjust Detect Multiple Attempts node code | | Notion logging fails | Incorrect database ID or missing credentials | Update Notion node credentials and database configuration | | Errors in normalization | Payload format mismatch | Update Normalize Login Event function code | Need Help? If you need help setting up, customizing or extending this workflow, WeblineIndia can assist with full n8n development, workflow automation, security event processing and custom integrations.
by Khairul Muhtadin
Automate your B2B lead generation by scraping Google Maps, generating AI business summaries, and extracting hidden contact emails from websites, all triggered via Telegram. Why Use This Workflow? Time Savings: Reduces lead research time from 4 hours of manual searching to 5 minutes of automated execution per 50 leads. Cost Reduction: Replaces expensive monthly lead database subscriptions with a pay-as-you-go model using Apify and OpenAI. Error Prevention: Uses AI to deduplicate results and ensure company summaries are professional and consistent for your CRM. Scalability: Allows you to trigger massive scraping tasks from your mobile phone via Telegram while the backend handles the heavy lifting. Ideal For Sales Development Reps (SDRs):** Rapidly building targeted lists of local businesses for cold outreach or door-knocking campaigns. Marketing Agencies:** Identifying new businesses in specific sectors (e.g., "Dentists in Paris") to offer SEO or advertising services. Real Estate Investors:** Finding specific commercial properties or business types in a geographic area to identify investment opportunities. How It Works Trigger: The workflow starts when you send a message to your Telegram bot in the format: Sector; Limit; MapsURL. Data Collection: n8n parses these parameters and triggers an Apify Actor to scrape Google Maps for business details. Processing: The workflow retrieves the results, removes duplicate entries, and begins a loop to process each business. Intelligence Layer (Summary): OpenAI analyzes the business data to create a natural, human-like summary of the company. Enrichment (Email Hunting): If the business has a website, Jina AI fetches the page content, and OpenAI extracts the most authoritative contact email. Output & Delivery: Core data is upserted into Google Sheets, and the email is updated once found. A "DONE" notification is sent to Telegram upon completion. 
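The trigger message format above (`Sector; Limit; MapsURL`) has to be split into the parameters the scraper run needs. A minimal parsing sketch with basic validation; the function name and error messages are illustrative, and the workflow's own parsing node may differ:

```javascript
// Parse the Telegram command "Sector; Limit; MapsURL" into scraper parameters.
// Illustrative sketch with validation added for safety; not the exact node code.
function parseSearchCommand(text) {
  const parts = text.split(';').map((p) => p.trim());
  if (parts.length !== 3) throw new Error('Expected format: Sector; Limit; MapsURL');
  const [sector, limitRaw, mapsUrl] = parts;
  const limit = parseInt(limitRaw, 10);
  if (!Number.isFinite(limit) || limit < 1) throw new Error('Limit must be a positive number');
  if (!/^https:\/\/www\.google\.[^/]+\/maps\//.test(mapsUrl)) throw new Error('Expected a Google Maps URL');
  return { sector, limit, mapsUrl };
}
```

Validating here, at the top of the workflow, means a malformed mobile message fails fast with a clear error instead of burning an Apify run.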
Setup Guide Prerequisites | Requirement | Type | Purpose | |-------------|------|---------| | n8n instance | Essential | Workflow execution and logic | | Apify Account | Essential | Scrapes Google Maps data | | OpenAI API Key | Essential | Generates summaries and extracts emails | | Google Sheets | Essential | Data storage and lead management | | Telegram Bot | Essential | User interface for triggering searches | | Jina AI Key | Essential | Converts websites to LLM-friendly Markdown | Installation Steps Import the JSON file into your n8n instance. Configure credentials: Telegram: Create a bot via @BotFather and add your token to the Telegram nodes. Apify: Provide your API token to the "Run Maps Scraper" and "Fetch Dataset" nodes. OpenAI: Add your API key to the enrichment nodes. Google Sheets: Connect your Google account and select your target Spreadsheet ID. Set up the Sheet: Ensure your Google Sheet has headers matching the "Upsert" node (Title, Email, Category, Website, etc.). Test execution: Send a message like Coffee Shops; 5; https://www.google.com/maps/search/coffee+shops+london to your bot. Technical Details Core Nodes | Node | Purpose | Key Configuration | |------|---------|-------------------| | Telegram Trigger | Captures user input | Listens for /message updates | | Apify Node | Executes Maps Scraper | Uses nwua9Gu5YrADL7ZDj actor ID | | OpenAI Node | AI Analysis | Configured for JSON output (Summary) and extraction (Email) | | Jina AI | Web Scraping | Converts HTML to Markdown for AI readability | | Google Sheets | Database | Uses Append or Update based on the business Title | Workflow Logic The workflow utilizes a Split In Batches loop to ensure stability. It first performs a "shallow" save of business details (name, phone, address) and then attempts a "deep" enrichment only if a website URL is detected. This two-stage approach ensures you don't lose data if a website crawl fails. 
Customization Options Basic Adjustments: Rate Limit:** Adjust the "Wait" node duration (default 2s) to comply with Google Sheets or API limits. Language:** Modify the OpenAI prompt to generate summaries in your preferred language (e.g., English, French, Spanish). Advanced Enhancements: CRM Integration:** Replace Google Sheets with HubSpot or Pipedrive for direct lead injection. Slack Notifications:** Send the final lead list or business summary directly to a sales channel. Auto-Emailer:** Add a Gmail/Outlook node to automatically send an introductory email once a lead is found. Troubleshooting Common Issues: | Problem | Cause | Solution | |---------|-------|----------| | Empty Email column | Website protected by bot-blockers | Try a different scraper or use a proxy in Jina AI | | Apify Timeout | Scraping limit set too high | Lower the "limit" parameter in your Telegram message | | 429 Errors | Google Sheets rate limits | Increase the duration in the "Wait Rate Limit" node | Use Case Examples Scenario 1: Local SEO Agency Challenge: Finding local contractors with poor reviews but high revenue potential. Solution: Use the workflow to scrape "Plumbers" in a city, use AI to summarize their online presence, and collect emails. Result: A curated list of 100 leads with contact info and a "Pitch" summary generated in minutes. Scenario 2: SaaS Cold Outreach Challenge: Getting the direct email of a business owner for a new booking software. Solution: Trigger the scraper via Telegram while in the field. The workflow extracts the "authoritative" email (manager/owner) from their site. Result: Accurate, high-intent lead data delivered directly to a master spreadsheet for the sales team. Created by: Khaisa Studio Category: Marketing | Tags: Lead Gen, AI, Google Maps, Telegram Need custom workflows? Contact us Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
by Aslamul Fikri Alfirdausi
This n8n template demonstrates how to build O'Carla, an advanced all-in-one Discord AI assistant. It intelligently handles natural conversations, professional image generation, and visual file analysis within a single server integration. Use cases are many: Deploy a smart community manager that remembers past interactions, an on-demand artistic tool for your members, or an AI that can "read" and explain uploaded documents and images! Good to know API Costs:** Per-interaction costs vary depending on the model used (Gemini vs. OpenRouter). Check your provider's dashboard for updated pricing. Infrastructure:** This workflow requires a separate Discord bot script (e.g., Node.js) to forward events to the n8n Webhook. It is recommended to host the bot using PM2 for 24/7 uptime. How it works Webhook Trigger: Receives incoming data (text and attachments) from your Discord bot. Intent Routing: The workflow uses conditional logic to detect if the user wants an image (via keyword gambar:), a vision analysis (via attachments), or a standard chat. Multi-Model Intelligence: Gemini 2.5: Powers rapid and high-quality general chat reasoning. Llama 3.2 Vision (via OpenRouter): Specifically used to describe and analyze images or text-based files. Flux (via Pollinations): Uses a specialized AI Agent to refine prompts and generate professional-grade images. Contextual Memory: A 50-message buffer window ensures O'Carla maintains the context of your conversation based on your Discord User ID. Clean UI Output: Generated image links are automatically shortened via TinyURL to keep the Discord chat interface tidy. How to use Connect your Google Gemini and OpenRouter API keys in the respective nodes. Replace the Webhook URL in your bot script with this workflow's Production Webhook URL. Type gambar: [your prompt] in Discord to generate images. Upload an image or file to Discord to trigger the AI Vision analysis. Requirements n8n instance (Self-hosted or Cloud). Google Gemini API Key.
OpenRouter API Key. Discord Bot Token and hosting environment. Customising this workflow O'Carla is highly flexible. You can change her personality by modifying the System Message in the Agent nodes, adjust the memory window length, or swap the LLM models to specialized ones like Claude 3.5 or GPT-4o.
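The intent routing described above (image generation on the `gambar:` keyword, vision analysis when attachments are present, otherwise normal chat) can be sketched as a single routing function. The payload field names are assumptions about what the forwarding bot script sends; the workflow itself implements this with conditional nodes.

```javascript
// Route a forwarded Discord message to one of three intents.
// Field names (content, attachments) are illustrative for the bot's payload.
function routeIntent(message) {
  const text = (message.content || '').trim();
  if (/^gambar:/i.test(text)) {
    // Image generation: strip the keyword, keep the user's prompt.
    return { intent: 'image', prompt: text.replace(/^gambar:/i, '').trim() };
  }
  if (Array.isArray(message.attachments) && message.attachments.length > 0) {
    // Vision analysis: an uploaded image or file takes this branch.
    return { intent: 'vision', attachments: message.attachments };
  }
  return { intent: 'chat', text };
}
```

Each branch then maps to its model: `image` to the Flux prompt-refining agent, `vision` to Llama 3.2 Vision via OpenRouter, and `chat` to Gemini with the 50-message memory buffer.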