by BHSoft
📌 Who is this for?
This workflow is designed for engineering teams, project managers, and IT operations teams that need consistent visibility into team availability across multiple projects. It's perfect for organizations that use Odoo for leave management and Redmine for project collaboration, and want everyone involved to get timely, automated Slack notifications whenever a team member will be absent the next day.

📌 The problem
When team members go dark, everything grinds to a halt. You're stuck with:
- Last-minute meeting reschedules (and frustrated stakeholders)
- Tasks assigned to people who aren't there
- No time to redistribute workload
- Bottlenecks affecting multiple projects

📌 How it works
1. Runs daily at 17:15 - Set it and forget it. Executes every afternoon, giving teams time to prepare.
2. Fetches Tomorrow's Approved Leaves from Odoo - Pulls all leave records with tomorrow's start date and "approved" status.
3. Maps Employee & Project Data - Grabs the employee's details and identifies every Redmine project they're assigned to.
4. Finds All Teammates on the Same Projects - Deduplicates across overlapping projects to avoid notification spam (see the sketch at the end of this description).
5. Sends Targeted Slack Notifications - Only notifies people who actually work with the absent member, plus optional manager alerts.

📌 Quick setup
Before you start, you'll need:
- Odoo API key
- Redmine API key
- Slack Bot Token (or Incoming Webhook URL)
Subflows need to be created within a new flow; the main flow will call these subflows.

📌 Results
What changes immediately:
- Zero surprises - teams know about absences 24 hours ahead
- Workload rebalancing happens before the person goes off
- Managers make proactive decisions, not reactive ones
- No more wasted Slack messages to irrelevant people
This creates a more predictable and transparent workflow across your engineering and project teams.

📌 Take it further
Ready to supercharge it? Add:
- Auto-assign backup owners for critical tasks
- Sync absences to Google Calendar/Outlook
- Log notifications to a database for auditing
- Conditional alerts (key roles, high-priority projects only)
- Daily summary digest of all upcoming absences

📌 Need help customizing?
Contact me for consulting and support: LinkedIn / Website
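As a rough illustration of step 4, the teammate deduplication amounts to collecting members from every Redmine project the absent employee belongs to and keeping each person only once. The sketch below is a minimal, self-contained version of that logic; the `Project` and `Member` shapes and the `absentUserId` parameter are illustrative assumptions, not the exact node code.

```typescript
// Minimal sketch of the "find all teammates on the same projects" step.
// Shapes are illustrative; the actual Redmine payloads differ.
interface Member { id: number; name: string; slackId?: string }
interface Project { id: number; name: string; members: Member[] }

function teammatesToNotify(projects: Project[], absentUserId: number): Member[] {
  const seen = new Map<number, Member>();
  for (const project of projects) {
    for (const member of project.members) {
      // Skip the absent person and anyone already collected from another project.
      if (member.id === absentUserId || seen.has(member.id)) continue;
      seen.set(member.id, member);
    }
  }
  return [...seen.values()];
}

// Example: Alice (id 1) is absent; Bob appears in both projects but is notified once.
const projects: Project[] = [
  { id: 10, name: "Website", members: [{ id: 1, name: "Alice" }, { id: 2, name: "Bob" }] },
  { id: 11, name: "Mobile",  members: [{ id: 2, name: "Bob" }, { id: 3, name: "Carol" }] },
];
console.log(teammatesToNotify(projects, 1).map(m => m.name)); // ["Bob", "Carol"]
```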
by Connor Provines
Schedule appointments from phone calls with AI using Twilio and ElevenLabs

This n8n template creates an intelligent phone receptionist that handles incoming calls, answers FAQs, and schedules appointments in Google Calendar. The system uses Twilio for phone handling, ElevenLabs for voice AI and basic conversation, and n8n for the complex scheduling logic, keeping responses snappy by only invoking the workflow when calendar operations are needed.

Who's it for
Businesses that need automated phone scheduling: service companies, clinics, consultants, or any business that takes appointments by phone. Perfect for reducing administrative overhead while maintaining a professional caller experience.

Good to know
- Redis memory is essential; without it, the AI must reparse entire conversations, causing severe lag in voice responses
- Claude 3.5 Sonnet is recommended for best scheduling results
- Typical response times: ElevenLabs-only responses <1s, n8n tool calls 2-4s
- All placeholder values must be customized or scheduling will fail

How it works
1. Twilio receives incoming calls and forwards them to the ElevenLabs voice AI
2. ElevenLabs handles casual conversation and FAQ responses instantly
3. When calendar operations are needed, ElevenLabs calls your n8n webhook
4. n8n checks Google Calendar availability using your business rules (see the sketch at the end of this description)
5. The Claude AI agent processes the request, collects the required information, and schedules the appointment
6. Redis maintains conversation context across the call
7. Calendar invites are automatically sent to customers

How to set up
1. Connect Twilio to ElevenLabs: In the Twilio Console, set your phone number webhook to your ElevenLabs agent URL
2. Configure ElevenLabs tools: Add "Client Tools" in ElevenLabs that point to your n8n webhook for checking availability, creating appointments, and updating appointments
3. Set the n8n webhook path: Replace REPLACE ME in the "Webhook: Receive User Request" node with a secure endpoint (e.g., /elevenlabs-voice-scheduler)
4. Configure Google Calendar: Replace all REPLACE ME instances with your Calendar ID in the three calendar nodes (Check Availability, Create Appointment, Update Event)
5. Set up Redis: Configure connection details in the "Redis Chat Memory" node
6. Customize the scheduling prompt: In the "Voice AI Agent" node, replace all bracketed placeholders with your business details:
   - [TIMEZONE], [START_TIME], [END_TIME], [OPERATING_DAYS], [BLOCKED_DAYS]
   - [MINIMUM_LEAD_TIME], [APPOINTMENT_DURATION], [SERVICE_TYPE]
   - [REQUIRED_FIELDS], [REQUIRED_NOTES_FIELDS]
7. Test: Make a test call to verify availability checking, information collection, and appointment creation

Requirements
- Twilio account with a phone number
- ElevenLabs Conversational AI account
- Google Calendar with OAuth2 credentials
- Redis instance (for session management)
- Anthropic API key (for Claude AI)
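To make step 4 concrete, availability checking boils down to comparing a requested slot against business hours and the busy intervals returned by the calendar. The sketch below is a simplified, self-contained version of that rule check; the busy-slot shape and the business-hours constants are placeholders standing in for your [START_TIME]/[END_TIME] settings, not the exact node configuration.

```typescript
// Simplified availability rule check; busy slots would come from the
// Google Calendar node, business hours from your placeholder settings.
interface Interval { start: Date; end: Date }

const BUSINESS_START_HOUR = 9;   // placeholder for [START_TIME]
const BUSINESS_END_HOUR = 17;    // placeholder for [END_TIME]

function overlaps(a: Interval, b: Interval): boolean {
  return a.start < b.end && b.start < a.end;
}

function isSlotAvailable(requested: Interval, busy: Interval[]): boolean {
  const withinHours =
    requested.start.getHours() >= BUSINESS_START_HOUR &&
    requested.end.getHours() <= BUSINESS_END_HOUR;
  return withinHours && !busy.some(b => overlaps(requested, b));
}

// Example: a 10:00-10:30 request against one existing 10:15-11:00 booking.
const busy: Interval[] = [
  { start: new Date("2025-01-06T10:15:00"), end: new Date("2025-01-06T11:00:00") },
];
const requested: Interval = {
  start: new Date("2025-01-06T10:00:00"),
  end: new Date("2025-01-06T10:30:00"),
};
console.log(isSlotAvailable(requested, busy)); // false (overlaps the existing booking)
```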
by Nitesh
🤖 Instagram DM Automation Workflow
Category: Marketing & Lead Engagement
Tags: Instagram, Puppeteer, Automation, Google Sheets, Lead Nurturing

🧠 Overview
This workflow automates Instagram DMs, engagement, and story interactions using Puppeteer in the backend. It connects to Google Sheets to fetch leads (usernames and messages) and sends personalized DMs one by one, while also mimicking human behavior by scrolling, liking posts, and viewing stories. It's designed to help marketers and businesses capture, nurture, and convert leads on Instagram, fully automated and AI-assisted.

⚙️ How It Works
1. Fetch Leads from Google Sheets
2. Send Instagram DMs via Puppeteer Backend
3. Simulate Human Actions
4. Update Lead Status
5. Rate Limit Handling

🧭 Setup Steps
⏱️ Estimated setup time: ~10-15 minutes

1. Prerequisites
- Active Google Sheets API connection with OAuth2 credentials.
- Puppeteer-based backend running locally or remotely.
- Node.js-based service handling: /login, /instagram, /viewstory, /logthis

2. Connect Google Sheets
- Use your Google account to authorize Google Sheets access.
- Add your Sheet ID in:
  - leads → for usernames & messages.
  - acc → for active accounts tracking.

3. Configure Webhook
- Copy your webhook URL from n8n.
- Use it to trigger the workflow manually or via an external API.

4. Adjust Timing
Edit the Code in JavaScript nodes if you want to:
- Change the DM delay (20-30s default)
- Adjust the story viewing delay (4.5-5.5 minutes)
A minimal sketch of this randomized delay logic follows this description.

5. Test Before Deploy
Run in test mode with 1-2 sample leads. Check that:
- The DM is sent.
- The Google Sheet updates the status.
- The backend logs actions.

🧾 Notes Inside the Workflow
You'll find sticky notes within the workflow for detailed guidance, covering:
✅ Setup sequence
💬 Message sending logic
⏳ Delay handling
📊 Google Sheets updates
⚠️ Rate-limit prevention
🔁 Loop control and retry mechanism

🚀 Use Cases
⚙️ Automate lead nurturing via Instagram DMs.
🤖 Send AI-personalized messages to prospects.
👥 Simulate real human actions (scroll, like, view stories).
🔥 Safely warm up new accounts with timed delays.
📊 Auto-update Google Sheets with DM status & timestamps.
💬 Run outbound messaging campaigns hands-free.
🧱 Handle rate limits smartly and continue smoothly.
🚀 Boost engagement, replies, and conversions with automation.
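As a rough sketch of the timing logic mentioned in step 4, the delays between DMs are simply randomized waits within a range before the next backend call. The snippet below shows that idea in plain TypeScript; the call to the backend's /instagram endpoint and its payload shape are assumptions for illustration, since the exact contract depends on your Node.js service.

```typescript
// Illustrative randomized-delay loop; the range mirrors the 20-30s default above.
const DM_DELAY_MS = { min: 20_000, max: 30_000 };

function randomDelay(range: { min: number; max: number }): Promise<void> {
  const ms = range.min + Math.random() * (range.max - range.min);
  return new Promise(resolve => setTimeout(resolve, ms));
}

interface Lead { username: string; message: string }

// Hypothetical call to the Puppeteer backend's /instagram endpoint;
// the real payload shape depends on your backend implementation.
async function sendDm(backendUrl: string, lead: Lead): Promise<void> {
  await fetch(`${backendUrl}/instagram`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username: lead.username, message: lead.message }),
  });
}

async function runCampaign(backendUrl: string, leads: Lead[]): Promise<void> {
  for (const lead of leads) {
    await sendDm(backendUrl, lead);
    await randomDelay(DM_DELAY_MS); // human-like pause before the next DM
  }
}
```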
by Jay Emp0
This workflow automatically backs up all public Postgres tables into a GitHub repository as CSV files every 24 hours. It keeps your database snapshots up to date: existing files are updated when data changes, and new backups are created for new tables.

How it works:
1. Schedule Trigger - Runs daily to start the backup process.
2. GitHub Integration - Lists existing files in the target repo to avoid duplicates.
3. Postgres Query - Fetches all table names from the public schema.
4. Data Extraction - Selects all rows from each table.
5. Convert to CSV - Saves the table data as CSV files (see the sketch at the end of this description).
6. Conditional Upload - If the table already exists in GitHub, the file is updated; if it is new, a new file is uploaded.

Postgres Tables Preview
GitHub Backup Preview

Use case:
Perfect for developers, analysts, or data engineers who want daily automated backups of Postgres data without manual exports, keeping both history and version control in GitHub.

Requirements:
- Postgres credentials with read access.
- GitHub repository (OAuth2 connected in n8n).
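A minimal sketch of steps 3-5: the table list can be read from `information_schema`, and each table's rows are then serialized to CSV. The query below is standard Postgres; the TypeScript CSV helper is a simplified illustration that does not handle quoting of embedded commas or newlines the way a production exporter (or the n8n CSV node) would.

```typescript
// Standard Postgres query used to discover the tables to back up (step 3).
const LIST_PUBLIC_TABLES = `
  SELECT table_name
  FROM information_schema.tables
  WHERE table_schema = 'public'
    AND table_type = 'BASE TABLE';
`;

// Simplified CSV serialization for step 5; a production exporter would also
// escape embedded commas, quotes, and newlines.
function toCsv(rows: Record<string, unknown>[]): string {
  if (rows.length === 0) return "";
  const headers = Object.keys(rows[0]);
  const lines = rows.map(row => headers.map(h => String(row[h] ?? "")).join(","));
  return [headers.join(","), ...lines].join("\n");
}

// Example with two rows from a hypothetical "users" table.
console.log(toCsv([
  { id: 1, email: "a@example.com" },
  { id: 2, email: "b@example.com" },
]));
```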
by as311
This workflow generates a data-driven Ideal Customer Profile (ICP) and retrieves lookalike companies in Germany from the official data source (Handelsregister). It starts by ingesting a set of base company IDs, serializing them, and sending a recommendation request to the Implisense API to fetch similar companies. When explanation mode is enabled, the workflow extracts and processes term features to create a structured keyword digest and uses an LLM to generate an ICP narrative. The pipeline outputs both a clean list of lookalike companies, enriched with CRM-ready fields, and a detailed ICP report derived from Implisense feature statistics.

How it works
Input → Serialization → Lookalikes → Lists/Report

Setup steps
1. Data Source
☐ Replace "Mock ICP Companies" with matched companies from the Implisense database
☐ Ensure the output has: id

2. Configure Credentials
Set up RapidAPI API credentials
Get your API key here: https://implisense.com/de/contact
Insert your API token in get_lookalikes (Basic auth)

3. Configure ICP Filters
☐ Edit the "Build Recommendation Request" node
☐ Set locationsFilter (e.g., de-be, de-by, de-nw)
☐ Set industriesFilter (NACE codes, e.g., J62 for IT)
☐ Set sizesFilter (MICRO, SMALL, MEDIUM, LARGE)
A sketch of the resulting request body follows this description.

4. Tune Results
☐ Adjust THRESHOLD in "Filter & Normalize Results" (default: 0.5)
☐ Adjust MIN_BASE_COMPANIES in "Collect Base Companies" (default: 3)
☐ Adjust the size parameter in the "Configuration" URL (default: 100)

5. CRM Integration
☐ Map fields in "list_of_companies" to match your CRM schema
☐ Add a CRM upsert node after "list_of_companies"
☐ Use the implisense-ID or domain as the unique identifier

Additional advice

Strengthen Base Company Quality
Use only highly representative base companies located in Germany that strongly match the intended ICP segment. Templates with dozens of mixed or heterogeneous IDs dilute the statistical signal in the /recommend endpoint and reduce relevance.

Refine Filters Aggressively
Limit recommendations by state, region, NACE code, or size class. Implisense returns cleaner results when the recommendation space is constrained; unnecessarily broad geography adds noise.

Increase the Size Parameter
Raise the size parameter when building the request to give the ranking model more candidates. This materially improves downstream sorting and selection.
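As an illustration of how the filters from step 3 and the size parameter from step 4 come together, the recommendation request is essentially a POST carrying the serialized base company IDs plus the configured filters. The sketch below shows a plausible request body; the endpoint URL, field names, and exact auth format are assumptions based on this description, so confirm them against the Implisense /recommend documentation and the get_lookalikes node.

```typescript
// Illustrative recommendation request; field names and URL are assumptions,
// mirror what the "Build Recommendation Request" node actually sends.
interface RecommendationRequest {
  ids: string[];              // serialized base company IDs
  locationsFilter: string[];  // e.g. ["de-be", "de-by", "de-nw"]
  industriesFilter: string[]; // NACE codes, e.g. ["J62"]
  sizesFilter: string[];      // "MICRO" | "SMALL" | "MEDIUM" | "LARGE"
  size: number;               // number of candidates to return (default 100)
}

async function getLookalikes(request: RecommendationRequest, apiToken: string) {
  const res = await fetch("https://api.example.com/recommend", { // placeholder endpoint
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Basic auth as noted in the setup steps; the exact user/token format is an assumption.
      Authorization: `Basic ${Buffer.from(`apikey:${apiToken}`).toString("base64")}`,
    },
    body: JSON.stringify(request),
  });
  return res.json();
}

// Downstream, results scoring below THRESHOLD (default 0.5) are dropped
// in "Filter & Normalize Results" before mapping to CRM-ready fields.
```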
by Incrementors
Description:
Automatically extracts all page URLs from website sitemaps, filters out unwanted sitemap links, and saves the clean URLs to Google Sheets for SEO analysis and reporting.

How It Works:
This workflow automates the process of discovering and extracting all page URLs from a website's sitemap structure. Here's how it works, step by step:

Step 1: URL Input - The workflow starts when you submit a website URL through a simple form interface.
Step 2: Sitemap Discovery - The system automatically generates and tests multiple possible sitemap URLs, including /sitemap.xml, /sitemap_index.xml, /robots.txt, and other common variations.
Step 3: Valid Sitemap Identification - It sends HTTP requests to each potential sitemap URL and filters out empty or invalid responses, keeping only accessible sitemaps.
Step 4: Nested Sitemap Processing - For sitemap index files, the workflow extracts all nested sitemap URLs and processes each one individually to ensure complete coverage.
Step 5: Page URL Extraction - From each valid sitemap, it parses the XML content and extracts all individual page URLs using both XML `<loc>` tags and HTML links.
Step 6: URL Filtering - The system removes any URLs containing "sitemap" to ensure only actual content pages (like product, service, or blog pages) are retained.
Step 7: Google Sheets Integration - Finally, all clean page URLs are automatically saved to a Google Sheets document, with duplicate prevention, for easy analysis and reporting.
A short sketch of the discovery and extraction logic (steps 2 and 5) follows this description.

Setup Steps:
Estimated setup time: 10-15 minutes

1. Import the Workflow: Import the provided JSON file into your n8n instance.

2. Configure Google Sheets Integration:
- Set up Google Sheets OAuth2 credentials in n8n
- Create a new Google Sheet or use an existing one
- Update the "Save Page URLs to Sheet" node with your Google Sheet URL
- Ensure your sheet has a tab named "Your sheet tab name" with a column header "Column name"

3. Test the Workflow:
- Activate the workflow in n8n
- Use the form trigger URL to submit a test website URL
- Verify that URLs are being extracted and saved to your Google Sheet

4. Customize (Optional):
- Modify the sitemap URL patterns in the "Build sitemap URLs" node if needed
- Adjust the filtering criteria in the "Exclude the Sitemap URLs" node
- Update the Google Sheets column mapping as required

Important Notes:
- Ensure your Google Sheets credentials have proper read/write permissions
- The workflow handles both XML sitemaps and robots.txt sitemap references
- Duplicate URLs are automatically prevented when saving to Google Sheets
- The workflow continues processing even if some sitemap URLs are inaccessible

Need Help?
For technical support or questions about this workflow: ✉️ info@incrementors.com or fill out this form: Contact Us
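For illustration, steps 2 and 5 reduce to two small pieces of logic: building candidate sitemap URLs from the submitted site, and pulling `<loc>` entries out of whatever XML comes back (with the "sitemap" filter from step 6 applied on top). The sketch below is a simplified stand-in for the corresponding HTTP/Code nodes; the exact candidate list and parsing in the workflow may differ.

```typescript
// Step 2: common sitemap locations tried for a submitted site URL.
function candidateSitemapUrls(siteUrl: string): string[] {
  const base = siteUrl.replace(/\/+$/, ""); // strip trailing slashes
  return [
    `${base}/sitemap.xml`,
    `${base}/sitemap_index.xml`,
    `${base}/sitemap-index.xml`,
    `${base}/robots.txt`, // robots.txt may declare "Sitemap:" lines
  ];
}

// Step 5: extract <loc> entries from sitemap XML, then drop nested sitemap links (step 6).
function extractPageUrls(xml: string): string[] {
  const locs = [...xml.matchAll(/<loc>\s*(.*?)\s*<\/loc>/g)].map(m => m[1]);
  return locs.filter(url => !url.includes("sitemap"));
}

// Example against a tiny inline sitemap.
const sampleXml = `
  <urlset>
    <url><loc>https://example.com/products/widget</loc></url>
    <url><loc>https://example.com/sitemap-blog.xml</loc></url>
  </urlset>`;
console.log(candidateSitemapUrls("https://example.com/"));
console.log(extractPageUrls(sampleXml)); // ["https://example.com/products/widget"]
```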
by Sri Kolagani
Transform your lead qualification process with automated, AI-powered phone calls triggered directly from Salesforce lead creation.

What this workflow does:
1. Webhook Trigger: Receives new lead data from Salesforce
2. Automated Calling: Initiates phone calls via Retell AI
3. Smart Monitoring: Polls the call status until completion (see the sketch at the end of this description)
4. AI Analysis: Uses OpenAI to analyze call transcripts
5. Salesforce Integration: Creates follow-up tasks with insights

Perfect for:
- Sales teams wanting to qualify leads faster
- Companies using Salesforce CRM
- Organizations looking to automate initial prospect outreach
- Teams wanting AI-powered call analysis

You'll need:
- Salesforce org with lead creation triggers
- Retell AI account and agent setup
- OpenAI API access
- Basic n8n workflow knowledge

Setup time: ~15 minutes
Author: Sri Kolagani
Template Type: Free
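The "Smart Monitoring" step is essentially a poll-and-wait loop that keeps checking the call status until the call is finished, then hands the transcript to the analysis step. The sketch below shows that pattern in plain TypeScript; the `getCallStatus` helper, its URL, and the status values are placeholders, so check the Retell AI API docs for the actual call-retrieval route and response fields.

```typescript
// Generic poll-until-complete pattern used for call monitoring.
// getCallStatus is a placeholder; wire it to the real Retell AI call-retrieval endpoint.
type CallStatus = "registered" | "ongoing" | "ended" | "error"; // illustrative values

async function getCallStatus(callId: string): Promise<CallStatus> {
  // Hypothetical request; replace the URL and auth with the real Retell AI API call.
  const res = await fetch(`https://api.example.com/calls/${callId}`, {
    headers: { Authorization: `Bearer ${process.env.RETELL_API_KEY}` },
  });
  const body = (await res.json()) as { status: CallStatus };
  return body.status;
}

async function waitForCallToEnd(
  callId: string,
  intervalMs = 10_000,
  maxAttempts = 60,
): Promise<CallStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getCallStatus(callId);
    if (status === "ended" || status === "error") return status; // hand off to transcript analysis
    await new Promise(resolve => setTimeout(resolve, intervalMs)); // wait before polling again
  }
  throw new Error(`Call ${callId} did not finish within the polling window`);
}
```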
by Stephan Koning
WhatsApp Micro-CRM with Baserow & WasenderAPI

Struggling to manage WhatsApp client communications? This n8n workflow isn't just automation; it's your centralized CRM solution for small businesses and freelancers.

How it works
- **Capture Every Message:** Integrates WhatsApp messages directly via WasenderAPI.
- **Effortless Contact Management:** Automates contact data standardization and intelligently manages records, creating new profiles or updating existing ones. A minimal sketch of this upsert decision follows this description.
- **Rich Client Profiles:** Retrieves profile pictures and decrypts image media, giving you full context.
- **Unified Data Hub:** Centralizes all conversations and media in Baserow; no more scattered interactions.

Setup Steps
Setup is incredibly fast; you can deploy this in under 15 minutes. Here's what you'll do:
- **Link WasenderAPI:** Connect your WasenderAPI webhooks directly to n8n.
- **Set up Baserow:** Duplicate our pre-built 'Contacts' (link) and 'Messages' (link) Baserow table templates.
- **Secure Your Data:** Input your API credentials (WasenderAPI and Baserow) directly into n8n.
Every single step is fully detailed in the workflow's sticky notes – we've made it foolproof.

Requirements
What do you need to get started?
- An active n8n instance (self-hosted or cloud).
- A WasenderAPI.com subscription or trial.
- A Baserow account.

Note: Keep the flow layout as is! This ensures that the flow runs in the correct order.
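To illustrate the contact-management step, standardization plus create-or-update usually amounts to normalizing the phone number and then checking whether a contact record already exists for it. The sketch below shows that logic in isolation; the normalization rules and the in-memory "table" are simplifications for illustration, not the actual Baserow node configuration.

```typescript
// Simplified contact standardization and upsert decision.
interface Contact { phone: string; name: string; profilePictureUrl?: string }

// Normalize to a bare +countrycode-digits form; real rules may differ per region.
function normalizePhone(raw: string): string {
  return raw.replace(/[^\d+]/g, "").replace(/^00/, "+");
}

// Stand-in for the Baserow "Contacts" table lookup + create/update.
const contactsTable = new Map<string, Contact>();

function upsertContact(incoming: Contact): "created" | "updated" {
  const key = normalizePhone(incoming.phone);
  const existing = contactsTable.get(key);
  if (existing) {
    // Refresh name / profile picture on the existing record.
    contactsTable.set(key, { ...existing, ...incoming, phone: key });
    return "updated";
  }
  contactsTable.set(key, { ...incoming, phone: key });
  return "created";
}

console.log(upsertContact({ phone: "+49 151 234-5678", name: "Anna" }));   // "created"
console.log(upsertContact({ phone: "0049151234 5678", name: "Anna K." })); // "updated" (same number, different formatting)
```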
by Edisson Garcia
🚀 Google Drive Image Enhancement with Gemini nano banana

This workflow automates image enhancement by integrating Google Drive with Google Gemini. It fetches unprocessed images from a source folder, applies AI-driven transformations based on a customizable prompt (e.g., clean and realistic product backgrounds), and uploads the enhanced results to a destination folder, streamlining e-commerce catalog preparation or creative pipelines.

🔑 Key Features
- **Customizable Prompt Node** → Easily adjust the style/instructions for Gemini (e.g., backgrounds, lighting, focus).
- **Google Drive Integration** → Automatically fetches images from a source folder and uploads results to a target folder.
- **AI Processing via Gemini** → Converts the original images to Base64, sends them with the prompt to Gemini, and returns enhanced versions (see the sketch at the end of this description).
- **Image Filtering** → Processes only files whose mimeType contains "image".
- **Loop Handling** → Iterates over all images in the source folder until all are processed.

⚙️ Setup Instructions
1. Configure Prompt
   - Open the "promt" node.
   - Replace the text with your desired Gemini instructions (e.g., "Add a clean, realistic background for baby products").
2. Set Google Drive Folders
   - In origin_folder → set Search Query to the name of the source folder (with unprocessed images).
   - In destination_folder → set Search Query to the name of the target folder (to save results).
3. Credentials
   - Provide valid Google Drive OAuth2 credentials for both Drive nodes.
   - Provide a Google Gemini API credential for the banana-request node.
4. Run the Workflow
   - Trigger from the init node.
   - The workflow will download → convert → send to Gemini → reconvert → upload the results automatically.

🛠 Customization Guidance
- Modify the prompt text to change how Gemini processes the images (background, style, product focus).
- Swap the Search Query for folder IDs in the Drive nodes if you need more precise targeting.
- Extend the workflow by chaining post-processing (e.g., watermarking, resizing, or tagging metadata).

© 2025 Innovatex • Automation & AI Solutions • innovatexiot.carrd.co • LinkedIn
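As a rough illustration of the AI processing step, the request to Gemini pairs the prompt text with the image as inline Base64 data, and the enhanced image comes back Base64-encoded in the response. The sketch below follows the public Generative Language API shape; the model name and the exact response path are assumptions you should verify against the current Gemini image-model documentation and the banana-request node in the workflow.

```typescript
import { readFile, writeFile } from "node:fs/promises";

// Model name is an assumption ("nano banana"); check the current Gemini image-capable model id.
const MODEL = "gemini-2.5-flash-image-preview";
const API_KEY = process.env.GEMINI_API_KEY;

async function enhanceImage(inputPath: string, outputPath: string, prompt: string): Promise<void> {
  const imageBase64 = (await readFile(inputPath)).toString("base64");

  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{
          parts: [
            { text: prompt }, // e.g. "Add a clean, realistic background for baby products"
            { inline_data: { mime_type: "image/png", data: imageBase64 } },
          ],
        }],
      }),
    },
  );

  const body: any = await res.json();
  // Response path is an assumption; image models typically return an inline-data part.
  const imagePart = body?.candidates?.[0]?.content?.parts
    ?.find((p: any) => p.inline_data || p.inlineData);
  const outBase64 = imagePart?.inline_data?.data ?? imagePart?.inlineData?.data;
  if (!outBase64) throw new Error("No image returned by Gemini");
  await writeFile(outputPath, Buffer.from(outBase64, "base64"));
}
```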
by Joe Swink
This workflow is a simple example of using n8n as an AI chat interface into Appian. It connects a local LLM, persistent memory, and API tools to demonstrate how an agent can interact with Appian tasks.

What this workflow does
- Chat interface: Accepts user input through a webhook or chat trigger
- Local LLM (Ollama): Runs on qwen2.5:7b with an 8k context window
- Conversation memory: Stores chat history in Postgres, keyed by sessionId
- AI Agent node: Handles reasoning, follows system rules (helpful assistant persona, date formatting, iteration limits), and decides when to call tools
- Appian integration tools:
  - List Tasks: Fetches a user's tasks from Appian
  - Create Task: Submits data for a new task in Appian (title, description, hours, cost)

How it works
1. A user sends a chat message
2. The workflow normalizes fields such as text, username, and sessionId (a minimal sketch of this normalization follows this description)
3. The AI Agent processes the message using Ollama and Postgres memory
4. If the user asks about tasks, the agent calls the Appian APIs
5. The result, either a task list or confirmation of a new task, is returned through the webhook

Why this is useful
- Demonstrates how to build a basic Appian connector in n8n with an AI chat front end
- Shows how an LLM can decide when to call Appian APIs to list or create tasks
- Provides a pattern that can be extended with more Appian endpoints, different models, or custom system prompts
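As a small illustration of step 2, normalization just maps whatever the webhook or chat trigger delivers onto a consistent set of fields before the agent sees them. The sketch below assumes a generic incoming payload shape; the real node may read different property names.

```typescript
// Normalize incoming chat payloads to the fields the agent expects.
// The incoming shape here is an assumption for illustration.
interface IncomingPayload {
  message?: string;
  chatInput?: string;      // n8n's chat trigger commonly delivers chatInput
  user?: { name?: string };
  sessionId?: string;
}

interface NormalizedMessage { text: string; username: string; sessionId: string }

function normalize(payload: IncomingPayload): NormalizedMessage {
  return {
    text: payload.message ?? payload.chatInput ?? "",
    username: payload.user?.name ?? "anonymous",
    // Fall back to a generated session id so Postgres memory is always keyed by something.
    sessionId: payload.sessionId ?? `session-${Date.now()}`,
  };
}

console.log(normalize({ chatInput: "List my Appian tasks", sessionId: "abc-123" }));
// { text: "List my Appian tasks", username: "anonymous", sessionId: "abc-123" }
```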
by Alejandro Scuncia
An extendable triage workflow that classifies severity, sets components, and posts actionable guidance for support engineers using n8n + Gemini + Cache Augmented Generation (CAG). Designed for Jira Service Management, but easily adaptable to Zendesk, Freshdesk, or ServiceNow.

Description
Support teams lose valuable time when tickets are misclassified: wrong severity, missing components, unclear scope. Engineers end up re-routing issues and chasing missing info instead of solving real problems. This workflow automates triage by combining domain rules with AI-driven classification and guidance, so engineers receive better-prepared tickets.

It includes:
✅ Real-time ticket capture via webhook
✅ AI triage for severity and component
✅ CAG-powered guidance: 3 next steps + missing info
✅ Internal audit comment with justifications & confidence
✅ Structured metrics for reporting

⚙️ How It Works
This workflow runs in 4 stages (a minimal sketch of the confidence-gated update in stage 3 follows the Domain Schema section):

📥 Entry & Setup
- Webhook triggers on ticket creation
- Loads domain rules (priority policy, components, guidance templates)
- Sets confidence threshold & triage label

🧠 AI Analysis (Gemini + CAG)
- Builds a structured payload with ticket + domain context
- Gemini proposes severity, component, guidance, missing info
- Output normalized for safe automation (valid JSON, conservative confidence)

🤖 Update & Audit
- Updates fields (priority, component, labels) if confidence ≥ threshold
- Posts an internal audit comment with:
  - 3 next steps
  - Missing info to request
  - Justifications + confidence

📊 Metrics
- Captures applied changes, confidence scores, and API statuses
- Enables reliability tracking & continuous improvement

🌟 Key Features
- **CAG-powered guidance** → lightning-fast, context-rich next steps
- **Explainable automation** → transparent audit comments for every decision
- **Domain-driven rules** → adaptable to any product or support domain
- **Portable** → swap JSM with **Zendesk, Freshdesk, ServiceNow** via HTTP nodes

🔐 Required Credentials

| Tool | Use |
|------|-----|
| Jira Service Management | Ticketing system (API + comments) |
| Google Gemini/Gemma | LLM analysis |
| HTTP Basic Auth | For Jira API requests (bot user) |

⚠️ Setup tip: create a dedicated bot user in Jira Service Management with an API token. This ensures clean audit logs and proper permissions, and avoids mixing automation with human accounts.

🧰 Customization Tips
- Replace https://your-jsm-url/... with your own Jira Service Management domain.
- Update the credentials with the bot user's API token created above.
- Swap the Jira Service Management nodes with other ticketing systems like Zendesk, Freshdesk, or ServiceNow.
- Extend the domain schema (keywords, guidance_addons) to fit your product or support environment.

🗂️ Domain Schema
This workflow uses a domain-driven schema to guide triage. It defines:
- **Components** → valid areas for classification
- **Priority policies & rules** → how severity is determined
- **Keywords** → domain-specific signals (e.g., "API error", "all users affected")
- **Guidance addons** → contextual next steps for engineers
- **No-workaround phrases** → escalate severity if present

✨ The full domain JSON (with complete keyword & guidance mapping) is included as a sticky note inside the workflow.
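To make the "Update & Audit" stage concrete, the core rule is: apply the AI's proposed changes only when its confidence meets the threshold, and always attach an audit comment explaining the decision. The sketch below shows that gating logic in isolation; the proposal shape, the default threshold value, and the comment format are illustrative, not the exact node output.

```typescript
// Confidence-gated triage update; shapes and threshold default are illustrative.
interface TriageProposal {
  severity: string;
  component: string;
  nextSteps: string[];       // the 3 suggested next steps
  missingInfo: string[];
  justification: string;
  confidence: number;        // 0..1, normalized from the model output
}

interface TriageDecision { applyChanges: boolean; auditComment: string }

function decideTriage(proposal: TriageProposal, threshold = 0.7): TriageDecision {
  const applyChanges = proposal.confidence >= threshold;
  const auditComment = [
    applyChanges
      ? `Auto-applied severity "${proposal.severity}" and component "${proposal.component}".`
      : `Suggestion only (confidence ${proposal.confidence.toFixed(2)} below threshold ${threshold}).`,
    `Next steps: ${proposal.nextSteps.join("; ")}`,
    `Missing info to request: ${proposal.missingInfo.join("; ") || "none"}`,
    `Justification: ${proposal.justification} (confidence ${proposal.confidence.toFixed(2)})`,
  ].join("\n");
  return { applyChanges, auditComment };
}

// Example: a high-confidence proposal is applied; a low-confidence one only comments.
console.log(decideTriage({
  severity: "High", component: "API", confidence: 0.86,
  nextSteps: ["Check API gateway logs", "Confirm scope with reporter", "Link related incidents"],
  missingInfo: ["Affected tenant IDs"], justification: "Outage keywords, no workaround phrase",
}).applyChanges); // true
```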
💡 Use Cases
- Automated triage for IT & support tickets
- Incident classification with outage/security detection
- Contextual guidance for engineers in customer support
- Faster escalation and routing of critical issues

🧠 Who It's For
- Support teams running Jira Service Management
- Platform teams automating internal ticket ops
- AI consultants prototyping practical triage workflows
- Builders exploring CAG today, RAG tomorrow

🚀 Try It Out!
⚙️ Import the Workflow in n8n (cloud or self-hosted).
🔑 Add Credentials (JSM API + Gemini key).
⚡ Configure Setup (confidence threshold, triage label, domain rules).
🔗 Connect Webhook in JSM → issue_created → n8n webhook URL.
🧪 Test with a Ticket → see auto-updates + AI audit comment.
🔄 Swap the Ticketing System → adapt HTTP nodes for Zendesk, Freshdesk, or ServiceNow.

💬 Have Feedback or Ideas? I'd Love to Hear
This project is open, modular, and evolving. If you try it, adapt it, or extend it, I'd love to hear your feedback; let's improve it together in the n8n builder community.
📧 ascuncia.es@gmail.com
🔗 LinkedIn
by Moe Ahad
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works
- Using the chat node, ask a question about information stored in your MySQL database
- The AI Agent converts your question into a SQL query (a sketch of the schema query it relies on appears below)
- The AI Agent executes the SQL query and returns the result
- The AI Agent can remember the previous 5 questions

How to set up:
1. Add your OpenAI API key in the "OpenAI Chat Model" node
2. Add your MySQL credentials in the "SQL DB - List Tables and Schema" and "Execute a SQL Query in MySQL" nodes
3. Update the database name in the "SQL DB - List Tables and Schema" node. Replace "your_query_name" under the Query field with your actual database name
4. After the above steps are completed, use the "When chat message received" node to ask a question about your data in plain English
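For context on what the "List Tables and Schema" step gives the agent, MySQL exposes table and column metadata through information_schema, which is what lets the model write valid SQL for your database. The first query below is a standard way to pull that schema ('your_database' is the placeholder to replace, matching step 3); the second block is only an illustration of the kind of SQL the agent might generate for a hypothetical "orders" table.

```typescript
// Schema discovery the agent relies on before writing SQL.
// 'your_database' mirrors the placeholder mentioned in the setup steps.
const LIST_TABLES_AND_SCHEMA = `
  SELECT table_name, column_name, data_type
  FROM information_schema.columns
  WHERE table_schema = 'your_database'
  ORDER BY table_name, ordinal_position;
`;

// Illustration of the question-to-SQL translation the agent performs.
// Question: "How many orders were placed in the last month?"
// The table and column names are hypothetical and depend on your actual schema.
const EXAMPLE_GENERATED_SQL = `
  SELECT COUNT(*) AS order_count
  FROM orders
  WHERE order_date >= DATE_SUB(CURDATE(), INTERVAL 1 MONTH);
`;

console.log(LIST_TABLES_AND_SCHEMA, EXAMPLE_GENERATED_SQL);
```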