by Romain Jouhannet
📝 Release Note Helper

Triggered by a GitLab MR webhook, this workflow automatically assists your team in writing customer-facing release notes by combining Linear issue data with Claude AI. Apply the rn-release-n8n label to any release note MR in your docs repository to trigger it.

How it works

- **Version detection** — reads your release RSS feed to find the last published version, then fetches all matching Linear version labels created since then to determine the version range automatically
- **Issue collection** — queries Linear for all completed issues in that version range that have Zendesk tickets, Slack links, or custom labels (Customer request, Release note public) attached
- **Ticket summary** — posts a structured list of all relevant issues to the MR as a comment
- **AI draft** — sends issue details to Claude, which generates customer-facing changelog entries grouped into ### Enhancements and ### Fixes, posted as a second MR comment
- **Done label** — adds rn-done to the MR when complete to prevent re-runs

Setup

1. Configure a GitLab webhook on your docs repo pointing to this workflow's URL (Merge Request events)
2. Create two labels on your GitLab repo: rn-release-n8n (to trigger) and rn-done (auto-applied on completion)
3. Update the RSS Read node URL to your release RSS feed
4. Replace YOUR_PROJECT_ID in all GitLab API nodes with your docs project ID
5. Replace YOUR_WORKSPACE in the Code nodes with your Linear workspace slug
6. Connect Linear API, GitLab API, and Anthropic API credentials

Notes

- Versioning assumes a vX.Y Linear label convention — adapt the Format labels node for your own scheme
- The AI prompt in Message a model is ready to use but can be customised to match your tone and changelog format
- Issues are filtered to those with Zendesk tickets, Slack attachments, or your custom labels — adjust in Set Params
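The automatic version detection hinges on the vX.Y label convention noted above. A minimal sketch of how that range comparison could look in an n8n Code node — the function names and logic here are illustrative assumptions, not the template's actual Format labels code:

```javascript
// Parse a "vX.Y" Linear label into numeric parts; returns null for
// anything that does not match the convention.
function parseVersion(label) {
  const m = /^v(\d+)\.(\d+)$/.exec(label);
  return m ? { major: Number(m[1]), minor: Number(m[2]) } : null;
}

// Keep only labels strictly newer than the last version published in the RSS feed.
function labelsSince(lastPublished, labels) {
  const last = parseVersion(lastPublished);
  return labels.filter((label) => {
    const v = parseVersion(label);
    if (!v || !last) return false;
    return v.major > last.major || (v.major === last.major && v.minor > last.minor);
  });
}
```

If your scheme uses three components (vX.Y.Z) or date-based tags, this comparison is the piece the Format labels node would need swapped out.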
by as311
This workflow generates a data-driven Ideal Customer Profile (ICP) and retrieves lookalike companies in Germany from the official data source (Handelsregister). It starts by ingesting a set of base company IDs, serializes them, and sends a recommendation request to the Implisense API to fetch similar companies. When explanation mode is enabled, the workflow extracts and processes term features to create a structured keyword digest and uses an LLM to generate an ICP narrative. The pipeline outputs both a clean list of lookalike companies, enriched with CRM-ready fields, and a detailed ICP report derived from Implisense feature statistics.

How it works

Input → Serialization → Lookalikes → Lists/Report

Setup steps

1. Data Source
   ☐ Replace "Mock ICP Companies" with matched companies from the Implisense database
   ☐ Ensure output has: id
2. Configure Credentials
   ☐ Set up RapidAPI API credentials
   ☐ Get your API key here: https://implisense.com/de/contact
   ☐ Insert your API Token in get_lookalikes (Basic auth)
3. Configure ICP Filters
   ☐ Edit "Build Recommendation Request" node
   ☐ Set locationsFilter (e.g., de-be, de-by, de-nw)
   ☐ Set industriesFilter (NACE codes, e.g., J62 for IT)
   ☐ Set sizesFilter (MICRO, SMALL, MEDIUM, LARGE)
4. Tune Results
   ☐ Adjust THRESHOLD in "Filter & Normalize Results" (default: 0.5)
   ☐ Adjust MIN_BASE_COMPANIES in "Collect Base Companies" (default: 3)
   ☐ Adjust size parameter in "Configuration" URL (default: 100)
5. CRM Integration
   ☐ Map fields in "list_of_companies" to match your CRM schema
   ☐ Add CRM upsert node after "list_of_companies"
   ☐ Use implisense-ID or domain as unique identifier

Additional advice

- **Strengthen base company quality.** Use only highly representative base companies located in Germany that strongly match the intended ICP segment. Templates with dozens of mixed or heterogeneous IDs dilute the statistical signal in the /recommend endpoint and reduce relevance.
- **Refine filters aggressively.** Limit recommendations by state, region, NACE code, or size class. Implisense returns cleaner results when the recommendation space is constrained; loosening geographic constraints only broadens the noise.
- **Increase the size parameter.** Raise the size parameter when building the request to give the ranking model more candidates. This materially improves downstream sorting and selection.
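As a concrete illustration of the THRESHOLD tuning step above, the "Filter & Normalize Results" logic might look like this in a Code node. The response field names (id, name, score) are assumptions for the sketch, not the documented Implisense schema:

```javascript
// Drop recommendations scoring below THRESHOLD (default 0.5, per the setup
// steps) and map the survivors to CRM-ready fields.
const THRESHOLD = 0.5;

function filterAndNormalize(recommendations) {
  return recommendations
    .filter((r) => typeof r.score === 'number' && r.score >= THRESHOLD)
    .map((r) => ({ implisenseId: r.id, company: r.name, score: r.score }));
}
```

Raising THRESHOLD tightens the list toward the strongest lookalikes; lowering it trades precision for volume.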
by Port IO
Complete security workflow from vulnerability detection to automated remediation, with severity-based routing and full organizational context from Port's catalog. This template provides end-to-end lifecycle management including automatic Jira ticket creation with appropriate priority, AI-powered remediation planning, and Claude Code-triggered fixes for critical vulnerabilities. The full guide is available here.

How it works

The n8n workflow orchestrates the following steps:

- **Webhook trigger**: Receives vulnerability alerts from security scanners (Snyk, Wiz, SonarQube, etc.) via POST request.
- **Port context enrichment**: Uses Port's n8n node to query your software catalog for service metadata, ownership, environment, SLA requirements, and dependencies related to the vulnerability.
- **AI remediation planning**: OpenAI analyzes the vulnerability with Port context and generates a remediation plan, determining if automated fixing is possible.
- **Severity-based routing**: Routes vulnerabilities through different paths based on severity level:
  - Critical: Jira ticket (Highest priority) → Check if auto-fixable → Trigger Claude Code fix → Slack alert with fix status
  - High: Jira ticket (High priority) → Slack notification to team channel
  - Medium/Low: Jira ticket only for tracking
- **Jira integration**: Creates tickets with full context including vulnerability details, affected service information from Port, and AI-generated remediation steps.
- **Claude Code remediation**: For auto-fixable critical vulnerabilities, triggers Claude Code via Port action to create a pull request with the security patch, referencing the Jira ticket.
- **Slack notifications**: Sends contextual alerts to the appropriate team channel (retrieved from Port) with Jira ticket reference and remediation status.

Prerequisites

- You have a Port account and have completed the onboarding process.
- Services and repositories are cataloged in Port with ownership information.
- Your security scanner (Snyk, Wiz, SonarQube) can send webhooks.
- You have a working n8n instance (Cloud or self-hosted) with Port's n8n custom node installed.
- Jira Cloud account with appropriate project permissions.
- Slack workspace with bot permissions to post messages.
- OpenAI API key for remediation planning.

Setup

1. Register for free on Port.io if you haven't already.
2. Create the Context Retriever Agent in Port following the guide.
3. Import the workflow and configure credentials (Port, Jira, Slack, OpenAI, Bearer Auth).
4. Select your Jira project in each Jira node (Critical, High, Medium/Low).
5. Update default-organization/repository with your default repository for Claude Code fixes.
6. Point your security scanner webhook to the workflow URL.
7. Test with a sample vulnerability payload.

⚠️ This template is intended for Self-Hosted instances only.
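The severity-based routing above amounts to a small decision table. This sketch is illustrative only; in the actual template the branching is done with n8n Switch/IF nodes rather than code, and the Medium/Low priority value is an assumption:

```javascript
// Map a scanner-reported severity to the actions each branch takes.
function routeVulnerability(severity) {
  switch (String(severity).toLowerCase()) {
    case 'critical':
      // Highest-priority ticket, auto-fix check, Slack alert with fix status
      return { jiraPriority: 'Highest', checkAutoFix: true, slackAlert: true };
    case 'high':
      // High-priority ticket plus a Slack notification to the team channel
      return { jiraPriority: 'High', checkAutoFix: false, slackAlert: true };
    default:
      // Medium/Low: Jira ticket only, for tracking
      return { jiraPriority: 'Medium', checkAutoFix: false, slackAlert: false };
  }
}
```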
by Jannik Hiller
This n8n workflow is a sophisticated B2B Lead Generation Scraper. It automates the entire journey from discovering businesses on Google Maps to extracting, scoring, and saving high-quality contact emails. Here is a breakdown of the workflow stages:

Stage 1: Google Maps Search & Pagination

The workflow starts with a list of search queries (e.g., "Dentist New York").

- **Looping:** It processes each query one by one.
- **Smart pagination:** Google Maps usually limits results per page. This workflow detects the nextPageToken and automatically re-calls the API until all available businesses for that query are collected.
- **Filtering:** It immediately filters out closed businesses, keeping only those marked as "OPERATIONAL".

Stage 2: Deep Web Scraping

For every business found with a website, the workflow performs a two-step crawl:

- **Homepage fetch:** It visits the main URL to find immediate contact info.
- **Contact page discovery:** A code node scans the homepage for links containing keywords like "Contact", "About", "Team", or "Impressum". It then visits these specific sub-pages to find hidden emails.

Stage 3: Email Quality Control & Scoring

This is the most advanced part of the logic. Instead of just grabbing any email, it uses a scoring system to rank them:

- **The filter:** It removes technical or junk emails (e.g., sentry@, noreply@, or image files disguised as emails).
- **The scorecard:**
  - +30 points: Domain match (e.g., info@company.com matches www.company.com).
  - +20 points: Personal touch (detects dots or names like john.doe@).
  - -40 points: Generic prefixes (penalizes info@, admin@, sales@).
  - -25 points: Free providers (penalizes @gmail.com, @yahoo.com).
- **Selection:** It sorts all found emails by score and keeps only the top-ranked email for each business.

Stage 4: Final Output

- **Deduplication:** It ensures no duplicate businesses are added to your list, even if they appeared in multiple search queries.
- **Data storage:** The final, cleaned data — including Business Name, Address, Phone, Website, and the Best Email — is appended to a Google Sheet.
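The Stage 3 scorecard translates directly into a small scoring function. This is a sketch using the point values listed above; the helper names are illustrative, not the workflow's own Code node:

```javascript
// Score a candidate email against the business website domain.
function scoreEmail(email, websiteDomain) {
  const [local, domain] = email.toLowerCase().split('@');
  let score = 0;
  if (domain === websiteDomain.toLowerCase()) score += 30;     // domain match
  if (local.includes('.')) score += 20;                        // personal touch (john.doe@)
  if (['info', 'admin', 'sales'].includes(local)) score -= 40; // generic prefix
  if (['gmail.com', 'yahoo.com'].includes(domain)) score -= 25; // free provider
  return score;
}

// Keep only the top-ranked email per business.
function bestEmail(emails, websiteDomain) {
  const ranked = emails
    .map((e) => ({ email: e, score: scoreEmail(e, websiteDomain) }))
    .sort((a, b) => b.score - a.score);
  return ranked.length ? ranked[0].email : null;
}
```

Under these weights, john.doe@company.com (matching the site's domain) outranks info@company.com, which is the intent: named contacts beat generic inboxes.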
by Connor Provines
Schedule appointments from phone calls with AI using Twilio and ElevenLabs

This n8n template creates an intelligent phone receptionist that handles incoming calls, answers FAQs, and schedules appointments to Google Calendar. The system uses Twilio for phone handling, ElevenLabs for voice AI and basic conversations, and n8n for complex scheduling logic — keeping responses snappy by only invoking the workflow when calendar operations are needed.

Who's it for

Businesses that need automated phone scheduling: service companies, clinics, consultants, or any business that takes appointments by phone. Perfect for reducing administrative overhead while maintaining a professional caller experience.

Good to know

- Redis memory is essential — without it, the AI must reparse entire conversations, causing severe lag in voice responses
- Claude 3.5 Sonnet is recommended for best scheduling results
- Typical response times: ElevenLabs-only responses <1s, n8n tool calls 2-4s
- All placeholder values must be customized or scheduling will fail

How it works

1. Twilio receives incoming calls and forwards them to the ElevenLabs voice AI
2. ElevenLabs handles casual conversation and FAQ responses instantly
3. When calendar operations are needed, ElevenLabs calls your n8n webhook
4. n8n checks Google Calendar availability using your business rules
5. The Claude AI agent processes the request, collects required information, and schedules appointments
6. Redis maintains conversation context across the call
7. Calendar invites are automatically sent to customers

How to set up

1. Connect Twilio to ElevenLabs: In the Twilio Console, set your phone number webhook to your ElevenLabs agent URL
2. Configure ElevenLabs tools: Add "Client Tools" in ElevenLabs that point to your n8n webhook for checking availability, creating appointments, and updating appointments
3. Set the n8n webhook path: Replace REPLACE ME in the "Webhook: Receive User Request" node with a secure endpoint (e.g., /elevenlabs-voice-scheduler)
4. Configure Google Calendar: Replace all REPLACE ME instances with your Calendar ID in the three calendar nodes (Check Availability, Create Appointment, Update Event)
5. Set up Redis: Configure connection details in the "Redis Chat Memory" node
6. Customize the scheduling prompt: In the "Voice AI Agent" node, replace all bracketed placeholders with your business details: [TIMEZONE], [START_TIME], [END_TIME], [OPERATING_DAYS], [BLOCKED_DAYS], [MINIMUM_LEAD_TIME], [APPOINTMENT_DURATION], [SERVICE_TYPE], [REQUIRED_FIELDS], [REQUIRED_NOTES_FIELDS]
7. Test: Make a test call to verify availability checking, information collection, and appointment creation

Requirements

- Twilio account with phone number
- ElevenLabs Conversational AI account
- Google Calendar with OAuth2 credentials
- Redis instance (for session management)
- Anthropic API key (for Claude AI)
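As an illustration of checking availability against your business rules, a rule check driven by the same placeholders ([START_TIME], [END_TIME], [OPERATING_DAYS]) could be sketched like this. The concrete values are example assumptions, not defaults shipped with the template:

```javascript
// Example business rules: Mon-Fri (getDay() 1-5), 09:00-17:00 local time.
const RULES = { startHour: 9, endHour: 17, operatingDays: [1, 2, 3, 4, 5] };

// Return true when the requested slot falls inside operating hours.
function isWithinBusinessHours(date, rules = RULES) {
  return (
    rules.operatingDays.includes(date.getDay()) &&
    date.getHours() >= rules.startHour &&
    date.getHours() < rules.endHour
  );
}
```

A real deployment would layer [MINIMUM_LEAD_TIME] and [BLOCKED_DAYS] on top, plus a free/busy lookup against the calendar itself.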
by Nitesh
🤖 Instagram DM Automation Workflow

Category: Marketing & Lead Engagement
Tags: Instagram, Puppeteer, Automation, Google Sheets, Lead Nurturing

🧠 Overview

This workflow automates Instagram DMs, engagement, and story interactions using Puppeteer in the backend. It connects to Google Sheets to fetch leads (usernames and messages) and sends personalized DMs one by one — while also mimicking human behavior by scrolling, liking posts, and viewing stories. It's designed to help marketers and businesses capture, nurture, and convert leads on Instagram — fully automated and AI-assisted.

⚙️ How It Works

1. Fetch leads from Google Sheets
2. Send Instagram DMs via the Puppeteer backend
3. Simulate human actions
4. Update lead status
5. Handle rate limits

🧭 Setup Steps

> ⏱️ Estimated setup time: ~10–15 minutes

1. Prerequisites
   - Active Google Sheets API connection with OAuth2 credentials.
   - Puppeteer-based backend running locally or remotely.
   - Node.js-based service handling: /login, /instagram, /viewstory, /logthis
2. Connect Google Sheets
   - Use your Google account to authorize Google Sheets access.
   - Add your Sheet ID in: leads → for usernames & messages; acc → for active accounts tracking.
3. Configure Webhook
   - Copy your Webhook URL from n8n.
   - Use it to trigger the workflow manually or via external API.
4. Adjust Timing
   - Edit the JavaScript Code nodes if you want to change the DM delay (20–30 s default) or the story viewing delay (4.5–5.5 minutes).
5. Test Before Deploy
   - Run in test mode with 1–2 sample leads.
   - Check that the DM is sent, the Google Sheet updates the status, and the backend logs actions.

🧾 Notes Inside the Workflow

You'll find Sticky Notes within the workflow for detailed guidance, covering:

- ✅ Setup sequence
- 💬 Message sending logic
- ⏳ Delay handling
- 📊 Google Sheets updates
- ⚠️ Rate-limit prevention
- 🔁 Loop control and retry mechanism

🚀 Use Cases

- ⚙️ Automate lead nurturing via Instagram DMs.
- 🤖 Send AI-personalized messages to prospects.
- 👥 Simulate real human actions (scroll, like, view stories).
- 🔥 Safely warm up new accounts with timed delays.
- 📊 Auto-update Google Sheets with DM status & timestamps.
- 💬 Run outbound messaging campaigns hands-free.
- 🧱 Handle rate limits smartly and continue smoothly.
- 🚀 Boost engagement, replies, and conversions with automation.
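The randomized delays are what keep the behavior human-like. A sketch of how the timing logic in a Code node might be expressed — the function itself is illustrative; the 20–30 s and 4.5–5.5 min ranges are the stated defaults:

```javascript
// Pick a random delay inside a [min, max] second range, in milliseconds.
function randomDelayMs(minSeconds, maxSeconds) {
  return Math.round((minSeconds + Math.random() * (maxSeconds - minSeconds)) * 1000);
}

const dmDelay = randomDelayMs(20, 30);      // pause between DMs
const storyDelay = randomDelayMs(270, 330); // story viewing, 4.5-5.5 minutes
```

Jittering the delay (rather than sleeping a fixed interval) avoids the perfectly regular cadence that rate-limit heuristics tend to flag.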
by Incrementors
Description

Automatically extracts all page URLs from website sitemaps, filters out unwanted sitemap links, and saves clean URLs to Google Sheets for SEO analysis and reporting.

How It Works

This workflow automates the process of discovering and extracting all page URLs from a website's sitemap structure. Here's how it works, step by step:

1. URL Input: The workflow starts when you submit a website URL through a simple form interface.
2. Sitemap Discovery: The system automatically generates and tests multiple possible sitemap URLs including /sitemap.xml, /sitemap_index.xml, /robots.txt, and other common variations.
3. Valid Sitemap Identification: It sends HTTP requests to each potential sitemap URL and filters out empty or invalid responses, keeping only accessible sitemaps.
4. Nested Sitemap Processing: For sitemap index files, the workflow extracts all nested sitemap URLs and processes each one individually to ensure complete coverage.
5. Page URL Extraction: From each valid sitemap, it parses the XML content and extracts all individual page URLs using both XML <loc> tags and HTML links.
6. URL Filtering: The system removes any URLs containing "sitemap" to ensure only actual content pages (like product, service, or blog pages) are retained.
7. Google Sheets Integration: Finally, all clean page URLs are automatically saved to a Google Sheets document with duplicate prevention for easy analysis and reporting.

Setup Steps

Estimated setup time: 10-15 minutes

1. Import the Workflow: Import the provided JSON file into your n8n instance.
2. Configure Google Sheets Integration:
   - Set up Google Sheets OAuth2 credentials in n8n
   - Create a new Google Sheet or use an existing one
   - Update the "Save Page URLs to Sheet" node with your Google Sheet URL
   - Ensure your sheet has a tab named "Your sheet tab name" with a column header "Column name"
3. Test the Workflow:
   - Activate the workflow in n8n
   - Use the form trigger URL to submit a test website URL
   - Verify that URLs are being extracted and saved to your Google Sheet
4. Customize (Optional):
   - Modify the sitemap URL patterns in the "Build sitemap URLs" node if needed
   - Adjust the filtering criteria in the "Exclude the Sitemap URLs" node
   - Update the Google Sheets column mapping as required

Important Notes

- Ensure your Google Sheets credentials have proper read/write permissions
- The workflow handles both XML sitemaps and robots.txt sitemap references
- Duplicate URLs are automatically prevented when saving to Google Sheets
- The workflow continues processing even if some sitemap URLs are inaccessible

Need Help?

For technical support or questions about this workflow: ✉️ info@incrementors.com or fill out this form: Contact Us
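Steps 2, 5, and 6 above can be sketched in a few lines: generate candidate sitemap locations, then pull page URLs out of the XML via <loc> tags and drop nested sitemap links. This is an illustrative simplification of the template's nodes, using a regex rather than a full XML parser:

```javascript
// Step 2: common sitemap locations to probe for a given site.
function buildSitemapCandidates(baseUrl) {
  const root = baseUrl.replace(/\/+$/, '');
  return ['/sitemap.xml', '/sitemap_index.xml', '/robots.txt'].map((p) => root + p);
}

// Steps 5-6: extract <loc> entries, then keep only content pages
// (any URL containing "sitemap" is treated as a nested index, not a page).
function extractPageUrls(xml) {
  const urls = [];
  const re = /<loc>\s*([^<]+?)\s*<\/loc>/g;
  let m;
  while ((m = re.exec(xml)) !== null) urls.push(m[1]);
  return urls.filter((u) => !u.includes('sitemap'));
}
```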
by Daniel Turgeman
How it works

1. A webhook receives form submissions with an email address
2. The email is validated and checked against HubSpot for duplicates
3. Lusha enriches the lead with phone number, job title, seniority, and company data
4. Enriched data is merged with form fields and upserted into HubSpot CRM
5. An SDR is alerted on Slack and the webhook returns a JSON response

Set up steps

1. Install the Lusha community node (@lusha-org/n8n-nodes-lusha)
2. Add your Lusha API, HubSpot OAuth2, and Slack credentials
3. Point your form's action URL to the webhook endpoint
4. Configure the Slack channel for SDR alerts
by Cliss Zhang
AI-Powered Business Lead Scraping, Qualification & Outreach System

Description

Search → Scrape → Qualify → CRM → Email Draft Automation

Categories: Lead Generation, Sales Automation, AI Enrichment, Revenue Ops

This workflow automatically finds local businesses, extracts real contact details from their websites, qualifies them, and writes everything into a CRM — with personalized cold email drafts ready to send. It's designed to remove the manual grind from lead sourcing and first-touch outreach. Search → leads → context → drafts → done.

What This Workflow Does

This automation takes raw local business results and turns them into usable, qualified leads:

- Pulls local business websites from a search dataset
- Scrapes each site for real contact information
- Normalizes emails, phones, names, and addresses
- Qualifies leads based on reachability and ops signals
- Writes clean, deduplicated records into a CRM
- Generates human-sounding cold email drafts

No copying websites. No guessing emails. No messy spreadsheets.

Why This Exists

Most lead gen systems fail before outreach even starts. They rely on shallow scraped data, guessy enrichment, low-quality lists, and manual cleanup. This system fixes that by grounding everything in what actually exists on the business website, then using AI only where it makes sense. Human judgment at the edges. Automation in the middle.

How It Works (High Level)

1. Lead Source Ingestion (Apify Dataset)
   The workflow starts with a dataset of local business search results. This can be Google search results, industry-specific directories, or any Apify-powered source that includes URLs. Batch size is intentionally limited for safety.
2. Website Scraping
   Each business website is fetched and stripped down to raw text. Failures are allowed — broken sites simply don't qualify later. The raw content becomes the single source of truth.
3. AI Contact Extraction & Normalization
   AI parses the site content to extract emails and phones, company name and address, contact people and titles, social links and contact pages, and context snippets for traceability. Everything is normalized and returned as strict JSON. If something isn't clearly present, it stays empty.
4. Lead Qualification
   Leads are scored based on reachability (email + website), basic operational signals, and optional social presence. Low-quality or unreachable leads are filtered out automatically.
5. CRM Write (Google Sheets)
   Qualified leads are written into a lightweight CRM: append-or-update by email, safe to re-run, easy to inspect and debug. This sheet becomes the system of record.
6. Cold Email Draft Generation
   For each qualified lead, AI generates a personalized cold email draft: casual, human tone, grounded in real site context, stored as drafts only, never auto-sent. Perfect for review, sequencing, or export into an outreach tool.

Tools Used

- **n8n** — workflow orchestration
- **Apify** — lead sourcing
- **OpenAI** — extraction, qualification, email drafting
- **Google Sheets** — lightweight CRM
- **Hunter** — email verification
- **Tavily** — optional enrichment & validation

Who This Is For

- Automation and AI agencies
- Consultants doing outbound
- Freelancers selling repeatable services
- Local-service lead gen operators
- Anyone tired of low-quality scraped lists

Customization Notes

- Swap Google Sheets for Airtable, HubSpot, or Notion
- Adjust qualification thresholds to control lead volume
- Replace the Apify source with any directory or search dataset
- Plug drafts into any outbound sequencing tool
- Extend metadata for analytics or CRM sync

Difficulty & Cost

- Difficulty: Intermediate (simple concept, careful execution)
- Estimated setup time: 30–45 minutes
- Ongoing cost: OpenAI + Apify + verification APIs only

Summary

This is not just a scraper. It's a lead intelligence pipeline that turns raw search results into real, usable outbound opportunities. Search → scrape → qualify → CRM → drafts. No guessing. No junk leads. No manual cleanup.
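The step 4 qualification logic described above can be sketched as a simple reachability score. The field names, weights, and cutoff here are assumptions for illustration; the actual template lets you tune its own thresholds:

```javascript
// Score a lead on reachability plus optional signals; qualify above a cutoff.
function qualifyLead(lead, minScore = 3) {
  let score = 0;
  if (lead.email) score += 2;   // reachable by email
  if (lead.website) score += 1; // site exists and was scrapable
  if (lead.phone) score += 1;   // basic operational signal
  if (Array.isArray(lead.socialLinks) && lead.socialLinks.length) score += 1; // optional social presence
  return { qualified: score >= minScore, score };
}
```

Raising minScore shrinks the list toward leads you can actually reach; lowering it keeps marginal records for manual review.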
by Moe Ahad
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

1. Using the chat node, ask a question pertaining to information stored in your MySQL database
2. The AI Agent converts your question to a SQL query
3. The AI Agent executes the SQL query and returns a result
4. The AI Agent remembers the previous 5 questions

How to set up

1. Add your OpenAI API key in the "OpenAI Chat Model" node
2. Add your MySQL credentials in the "SQL DB - List Tables and Schema" and "Execute a SQL Query in MySQL" nodes
3. Update the database name in the "SQL DB - List Tables and Schema" node: replace "your_query_name" under the Query field with your actual database name
4. After the above steps are completed, use the "When chat message received" node to ask a question about your data using plain English
by Stephan Koning
WhatsApp Micro-CRM with Baserow & WasenderAPI

Struggling to manage WhatsApp client communications? This n8n workflow isn't just automation; it's your centralized CRM solution for small businesses and freelancers.

How it works

- **Capture every message:** Integrates WhatsApp messages directly via WasenderAPI.
- **Effortless contact management:** Automates contact data standardization and intelligently manages records (creating new or updating existing profiles).
- **Rich client profiles:** Retrieves profile pictures and decrypts image media, giving you full context.
- **Unified data hub:** Centralizes all conversations and media in Baserow; no more scattered interactions.

Setup Steps

Setup is incredibly fast; you can deploy this in under 15 minutes. Here's what you'll do:

1. **Link WasenderAPI:** Connect your WasenderAPI webhooks directly to n8n.
2. **Set up Baserow:** Duplicate our pre-built 'Contacts' (link) and 'Messages' (link) Baserow table templates.
3. **Secure your data:** Input your API credentials (WasenderAPI and Baserow) directly into n8n.

Every single step is fully detailed in the workflow's sticky notes; we've made it foolproof.

Requirements

What do you need to get started?

- An active n8n instance (self-hosted or cloud).
- A WasenderAPI.com subscription or trial.
- A Baserow account.

Note: Keep the flow layout as is! This ensures the workflow runs in the correct order.
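As a sketch of the contact-standardization step: WhatsApp identifies chats by a JID of the form number@s.whatsapp.net, so deriving a stable phone key for Baserow lookups could look like this. The exact webhook payload fields are assumptions about WasenderAPI, not its documented schema:

```javascript
// Turn a WhatsApp JID into a normalized phone key plus a display name,
// suitable for create-or-update matching against the Contacts table.
function normalizeContact(jid, pushName) {
  const digits = jid.split('@')[0].replace(/\D/g, '');
  return { phone: '+' + digits, name: pushName || 'Unknown' };
}
```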
by Joe Swink
This workflow is a simple example of using n8n as an AI chat interface into Appian. It connects a local LLM, persistent memory, and API tools to demonstrate how an agent can interact with Appian tasks.

What this workflow does

- Chat interface: Accepts user input through a webhook or chat trigger
- Local LLM (Ollama): Runs on qwen2.5:7b with an 8k context window
- Conversation memory: Stores chat history in Postgres, keyed by sessionId
- AI Agent node: Handles reasoning, follows system rules (helpful assistant persona, date formatting, iteration limits), and decides when to call tools
- Appian integration tools:
  - List Tasks: Fetches a user's tasks from Appian
  - Create Task: Submits data for a new task in Appian (title, description, hours, cost)

How it works

1. A user sends a chat message
2. The workflow normalizes fields such as text, username, and sessionId
3. The AI Agent processes the message using Ollama and Postgres memory
4. If the user asks about tasks, the agent calls the Appian APIs
5. The result, either a task list or confirmation of a new task, is returned through the webhook

Why this is useful

- Demonstrates how to build a basic Appian connector in n8n with an AI chat front end
- Shows how an LLM can decide when to call Appian APIs to list or create tasks
- Provides a pattern that can be extended with more Appian endpoints, different models, or custom system prompts