by SpaGreen Creative
WhatsApp Number Verify & Confirmation System with Rapiwa API and Google Sheets

## Who is this for?

This n8n workflow makes it easy to verify WhatsApp numbers submitted through a form. When someone fills out the form, the automation kicks in—capturing the data via a webhook, checking the WhatsApp number with the Rapiwa API, and sending a confirmation message if the number is valid. All submissions, verified or not, are logged to a Google Sheet with a clear status. It's a great solution for businesses, marketers, or developers who need a reliable way to verify leads, manage event signups, or onboard customers over WhatsApp.

## How it works

This n8n automation listens for form submissions via a webhook, validates the provided WhatsApp number using the Rapiwa API, sends a confirmation message if the number is verified, and then appends the submission data to a Google Sheet, marking each entry as verified or unverified.

## Features

- **Webhook Trigger**: Captures form submissions via HTTP POST
- **Data Cleaning**: Formats and sanitizes the WhatsApp number
- **Rapiwa API Integration**: Checks whether the number is registered on WhatsApp
- **Conditional Messaging**: Sends confirmation messages only to verified WhatsApp users
- **Google Sheets Integration**: Appends all submissions with a validity status
- **Auto Timestamping**: Adds the submission date in YYYY-MM-DD format
- **Throttling Support**: Built-in delay to avoid hitting API or sheet rate limits
- **Separation of Verified/Unverified**: Distinct handling for both types of entries

## Nodes Used in the Workflow

- **Webhook**
- **Format Webhook Response Data** (Code)
- **Loop Over Items** (Split In Batches)
- **Cleane Number** (Code)
- **check valid whatsapp number** (HTTP Request)
- **If** (Conditional)
- **Send Message Using Rapiwa**
- **verified append row in sheet** (Google Sheets)
- **unverified append row in sheet** (Google Sheets)
- **Wait1**

## How to set up

### Webhook

1. Add a Webhook node to the canvas.
2. Set HTTP Method to POST.
3. Copy the Webhook URL path (/a9b6a936-e5f2-4xxxxxxxxxe0a970d5).
4. In your frontend form or app, make a POST request to that URL (a test example appears after the setup steps). The request body should include:

```json
{
  "business_name": "ABC Corp",
  "location": "New York",
  "whatsapp": "+1 234-567-8901",
  "email": "user@example.com",
  "name": "John Doe"
}
```

### Format Webhook Response Data

Add a Code node after the Webhook node and use this JavaScript:

```javascript
const result = $input.all().map(item => {
  const body = item.json.body || {};
  const submitted_date = new Date().toISOString().split('T')[0];
  return {
    business_name: body.business_name,
    location: body.location,
    whatsapp: body.whatsapp,
    email: body.email,
    name: body.name,
    submitted_date: submitted_date
  };
});
return result;
```

### Loop Over Items

Insert a SplitInBatches node after the data formatting and set the Batch Size to a reasonable number (e.g. 1 or 10). This is useful for processing multiple submissions at once, especially if your webhook receives arrays of entries.

> Note: If you expect only one submission at a time, this still helps future-proof your workflow.

### Cleane Number

Add a Code node named Cleane Number (keep this exact spelling—expressions later in the workflow reference the node by name) and paste the following JavaScript:

```javascript
const items = $input.all();
const updatedItems = items.map((item) => {
  const waNo = item?.json["whatsapp"];
  const waNoStr = typeof waNo === 'string' ? waNo : (waNo !== undefined && waNo !== null ? String(waNo) : "");
  const cleanedNumber = waNoStr.replace(/\D/g, "");
  item.json["whatsapp"] = cleanedNumber;
  return item;
});
return updatedItems;
```

### Check WhatsApp Number using Rapiwa

Add an HTTP Request node.
Configure it as follows:

- Method: POST
- URL: https://app.rapiwa.com/api/verify-whatsapp
- Authentication: HTTP Bearer — select or create your Rapiwa token credential
- Body Parameters: number: ={{ $json.whatsapp }}

This API call checks whether the WhatsApp number exists and is valid. Expected output:

```json
{
  "success": true,
  "data": {
    "number": "+88017XXXXXXXX",
    "exists": true,
    "jid": "88017XXXXXXXXXXXXX",
    "message": "✅ Number is on WhatsApp"
  }
}
```

### Conditional If Check

Add an If node after the Rapiwa validation and configure the condition:

- Left Value: ={{ $json.data.exists }}
- Operation: true

If true, the number is valid—continue to messaging and append the row as "verified". If false, go directly to the unverified sheet branch.

> Note: This step branches the flow based on the WhatsApp verification result.

### Send WhatsApp Message (Rapiwa)

Add an HTTP Request node under the TRUE branch of the If node and set:

- Method: POST
- URL: https://app.rapiwa.com/api/send-message
- Authentication: HTTP Bearer — use the same Rapiwa token
- Body Parameters:
  - number: ={{ $json.data.phone }}
  - message_type: text
  - message: Hi {{ $('Cleane Number').item.json.name }}, Thanks! Your form has been submitted successfully.

This sends a confirmation message via WhatsApp to the verified number.

### Google Sheets – Verified Data

Add a Google Sheets node under the TRUE branch (after the message is sent) and set:

- Operation: Append
- Document ID: choose your connected Google Sheet
- Sheet Name: your active sheet (e.g., Sheet1)
- Column Mapping:
  - Business Name: ={{ $('Cleane Number').item.json.business_name }}
  - Location: ={{ $('Cleane Number').item.json.location }}
  - WhatsApp Number: ={{ $('Cleane Number').item.json.whatsapp }}
  - Email : ={{ $('Cleane Number').item.json.email }}
  - Name: ={{ $('Cleane Number').item.json.name }}
  - Date: ={{ $('Cleane Number').item.json.submitted_date }}
  - validity: verified

Use OAuth2 Google Sheets credentials for access.

> Note: Make sure the sheet has matching column headers.

### Google Sheets – Unverified Data

Add a Google Sheets node under the FALSE branch of the If node. Use the same settings as the verified node, but set validity: unverified. This stores entries with unverified WhatsApp numbers in the same Google Sheet.

### Wait Node

Add a Wait node after both Google Sheets nodes and set the wait time to 2 seconds. This delay prevents API throttling and adds buffer time before processing the next item in the batch.

## Google Sheet Column Reference

A Google Sheet formatted like this ➤ Sample Sheet

| Business Name     | Location          | WhatsApp Number | Email                | Name         | validity   | Date       |
|-------------------|-------------------|-----------------|----------------------|--------------|------------|------------|
| SpaGreen Creative | Dhaka, Bangladesh | 8801322827799   | contact@spagreen.net | Abdul Mannan | unverified | 2025-09-14 |
| SpaGreen Creative | Bangladesh        | 8801322827799   | contact@spagreen.net | Abdul Mannan | verified   | 2025-09-14 |

> Note: The Email column header includes a trailing space ("Email "). Ensure your column headers match exactly to prevent data misalignment.
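## Testing the Webhook

To test the flow end to end, you can send a sample submission to the webhook from any HTTP client. A minimal sketch using JavaScript's fetch—the host is a placeholder for your own n8n instance:

```javascript
// Sample test call to the n8n webhook (URL host is a placeholder for your instance).
const payload = {
  business_name: "ABC Corp",
  location: "New York",
  whatsapp: "+1 234-567-8901",
  email: "user@example.com",
  name: "John Doe"
};

fetch("https://your-n8n-instance/webhook/a9b6a936-e5f2-4xxxxxxxxxe0a970d5", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload)
})
  .then(res => res.text())   // n8n returns its webhook acknowledgment
  .then(console.log)
  .catch(console.error);
```

After sending, check the Google Sheet for a new row with the expected validity status.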
## How to customize the workflow

- Modify the confirmation message to match your brand tone
- Add input validation for missing or malformed fields
- Route unverified submissions to a separate spreadsheet or alert channel
- Add Slack or email notifications on new verified entries

## Notes & Warnings

- Ensure your Google Sheets credential has access to the target sheet
- Rapiwa requires an active subscription for API access
- Monitor Rapiwa API limits and adjust the wait time as needed
- Keep your webhook URL protected to avoid misuse

## Support & Community

- WhatsApp Support: Chat Now
- Discord: Join SpaGreen Community
- Facebook Group: SpaGreen Support
- Website: spagreen.net
- Developer Portfolio: Codecanyon SpaGreen
by Udit Rawat
This workflow automates and centralizes your bookmarking process using AI-powered tagging and seamless integration between your Android device and a self-hosted Readeck platform (https://readeck.org/en/). It eliminates manual entry, organizes links with smart AI-generated tags, and ensures your bookmarks are always accessible, searchable, and secure.

## How It Works

📱 **Android Shortcut Integration**
Use the HTTP Shortcuts app to create a one-tap trigger that sends URLs and titles from your Android phone directly to n8n.

🤖 **AI-Powered Tagging & Processing**
Leverage GPT-4 to analyze content context and auto-generate relevant tags (e.g., "Tech Tutorials," "Productivity Tools"), and to extract clean titles and URLs from messy shared data (even from apps like Twitter or Reddit).

🔗 **Readeck Integration**
Automatically save processed bookmarks to your self-hosted Readeck (or similar) platform with structured metadata (title, URL, tags).

⚡ **Silent Automation**
Runs in the background—no pop-ups or interruptions.

🔒 **Pro Security**
Optional authentication (API tokens, headers) to protect your data.

## Use Case

Perfect for researchers, content creators, or anyone drowning in tabs who wants to:

- Save articles, videos, or social posts in one click.
- Organize bookmarks with AI-generated tags.
- Build a personal knowledge base that's always accessible.

## Tutorial

1️⃣ **Set Up the Android Shortcut**
Install "HTTP Shortcuts" and configure it to send data to your n8n webhook (see the sample payload below). Enable "Share Menu" to trigger bookmarks from any app.

2️⃣ **Configure the n8n Workflow**
Import the template and add your Readeck API token (or that of a similar service).

3️⃣ **Test & Scale**
Share a link from your phone—watch it appear in Readeck instantly! Add error handling or notifications for advanced use.

> Note: For self-hosted platforms, ensure your instance is publicly accessible (or use a VPN).

## Why Choose This Workflow?

- **Zero Manual Entry**: Save hours of copying/pasting.
- **AI Organization**: Say goodbye to chaotic bookmark folders.
- **Privacy First**: Host your data on your terms.

Transform your bookmarking chaos into a streamlined system—try "Save: Bookmark" today! 🚀
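For reference, a minimal sketch of the JSON body the HTTP Shortcuts app could be configured to POST to the webhook. The field names here are assumptions—they must match whatever fields your workflow's webhook node and downstream AI prompt expect:

```json
{
  "url": "https://example.com/interesting-article",
  "title": "An Interesting Article"
}
```

In HTTP Shortcuts, map the shared link and page title into these fields via the app's share-menu variables, so any app's "Share" action fills them automatically.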
by lin@davoy.tech
Are you looking to create a counseling chatbot that provides emotional support and mental health guidance through the LINE messaging platform? This guide walks you through connecting LINE with powerful AI language models like GPT-4 to build a chatbot that supports users in navigating their emotions, offering 24/7 conversational therapy and accessible mental health resources.

By leveraging LINE's webhook integration and Azure OpenAI, this template lets you design a chatbot that is both empathetic and efficient, ensuring users receive timely and professional responses. Whether you're a developer, counselor, or business owner, this guide will help you create a customizable counseling chatbot tailored to your audience's needs.

## Who Is This Template For?

- **Developers** who want to integrate AI-powered chatbots into the LINE platform for mental health applications.
- **Counselors & Therapists** looking to expand their reach and provide automated emotional support to clients outside of traditional sessions.
- **Businesses & Organizations** focused on improving mental health accessibility and offering innovative solutions to their users.
- **Educators & Nonprofits** seeking tools to provide free or low-cost counseling services to underserved communities.

## How does this work?

1. A LINE webhook receives the new message.
2. A loading animation is sent in LINE.
3. The workflow checks whether the input is text.
4. The text is sent as a prompt to the chat model (GPT-4o).
5. The reply is sent back to the user (use an Edit Fields node to format it before replying).

## Pre-Requisites

- Access to the LINE Developers Console.
- An Azure OpenAI account with the necessary credentials.

## Set-up

### Webhook

To receive messages from LINE, configure your webhook:

- Set up a webhook in the LINE Developers Console.
- Copy the Webhook URL from the Line Chatbot node and paste it into the LINE Console. Be sure to remove any 'test' segment from the URL when moving to production.
- The loading animation reassures users that the system is processing their request. Authorize it using header authorization.

### Message Handling

Use the Check Message Type IsText? node to verify whether the incoming message is text. If it is, proceed with ChatGPT processing; otherwise, send a reply indicating that non-text inputs are not supported.

### AI Agent Configuration

Define the system message within the AI Agent node to guide the conversation based on your desired interaction principles, then connect the Azure OpenAI Chat Model to the AI Agent.

### Formatting Responses

Ensure responses are properly formatted before sending them back to the user.

### Reply Message

Use the ReplyMessage - Line node to send the formatted response (see the example request below). Ensure proper header authorization using Bearer tokens.
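For reference, the reply step is a standard LINE Messaging API call of this shape—the reply token comes from the incoming webhook event, and the channel access token goes in the Authorization header (a sketch; your formatted message text replaces the example):

```http
POST https://api.line.me/v2/bot/message/reply
Authorization: Bearer <channel-access-token>
Content-Type: application/json

{
  "replyToken": "<replyToken from the webhook event>",
  "messages": [
    { "type": "text", "text": "Thank you for sharing. I'm here to listen—can you tell me more about how you're feeling?" }
  ]
}
```

Note that LINE reply tokens are single-use and short-lived, which is why the workflow formats and sends the response within the same execution that received the event.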
by Rosh Ragel
Programmatically Pull Square Report Data into n8n

## What It Does

This sub-workflow connects to the Square API and generates a daily sales summary report for all of your Square locations. The report matches the figures displayed in Square Dashboard > Reports > Sales Summary. It's designed to be reused in other workflows—ideal for reporting, data storage, accounting, or automation.

## Prerequisites

To use this workflow, you'll need:

- Square API credentials (configured as a Header Auth credential)

## How to Set Up Square Credentials

1. Go to Credentials > Create New
2. Choose Header Auth
3. Set the Name to "Authorization"
4. Set the Value to your Square Access Token (e.g., Bearer <your-api-key>)

## How It Works

1. **Trigger**: The workflow is triggered as a sub-workflow, requiring a report_date input.
2. **Fetch Locations**: An HTTP request gets all Square locations linked to your account.
3. **Fetch Orders**: For each location, an HTTP request pulls completed orders for the specified report_date.
4. **Filter Empty Locations**: Locations with no sales are ignored.
5. **Aggregate Sales Data**: A Code node processes the order data and produces a summary identical to Square's built-in Sales Summary report.
6. **Output**: A cleaned, consistent summary that can be consumed by parent workflows or other nodes.

## Example Use Cases

- Automatically store daily sales data in Google Sheets, MySQL, or PostgreSQL for analysis and historical tracking
- Automatically send daily email or Slack reports to managers or finance teams
- Build weekly/monthly reports by looping over multiple dates
- Push sales data into accounting software like QuickBooks or Xero for automated bookkeeping
- Calculate commissions or rent payments based on sales volume

## How to Use

1. Configure both HTTP Request nodes to use your Square API credential.
2. If you are not in the Toronto/New York timezone, change the "start_at" and "end_at" parameters in the second HTTP node from "-05:00" to your local UTC offset (see the example request at the end of this section).
3. Use it as a sub-workflow inside a main workflow, passing a report_date (formatted as YYYY-MM-DD) when you call it.

## Customization Options

- Add pagination to handle locations with more than 1,000 orders per day.
- Expand the workflow to save or send the report output via additional integrations (email, database, webhook, etc.).

## Why It's Useful

This workflow saves time, reduces manual report pulling from Square, and enables smarter automation around sales data—whether for operations, finance, or performance monitoring.
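As a reference for the timezone note above: fetching completed orders for a date range is typically done with Square's SearchOrders endpoint, and the "start_at"/"end_at" offsets live inside the date-time filter. A sketch of such a request body (values illustrative—your node's exact body may differ):

```json
{
  "location_ids": ["<location-id>"],
  "query": {
    "filter": {
      "state_filter": { "states": ["COMPLETED"] },
      "date_time_filter": {
        "closed_at": {
          "start_at": "2025-09-14T00:00:00-05:00",
          "end_at": "2025-09-15T00:00:00-05:00"
        }
      }
    },
    "sort": { "sort_field": "CLOSED_AT" }
  }
}
```

Changing "-05:00" to, say, "+01:00" shifts the report window to your local day boundaries.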
by n8n Team
## Who this template is for

This template is for developers, content creators, or application builders who want to integrate an AI-powered text-to-image generation service into their applications or systems via an API endpoint.

## Use case

Creating a secure API endpoint that converts text prompts into AI-generated images, with built-in content moderation to prevent inappropriate content generation. This can be used for creative applications, content creation tools, prototyping interfaces, or any system that needs on-demand image generation.

## How this workflow works

1. Receives a text prompt through a webhook endpoint
2. Filters the prompt for inappropriate content using AI moderation
3. Submits valid prompts to the Fal.ai Flux image generation service
4. Polls for completion status and retrieves the generated image when ready
5. Returns the image results in structured JSON format to the client

## Set up steps

1. Create a Fal.ai account and obtain API credentials
2. Configure the HTTP Header Auth credentials with your Fal.ai API key
3. Set up an OpenAI API key for the content moderation component
4. Deploy the workflow and note the webhook URL for your API endpoint
5. Test the endpoint by sending a POST request with a JSON body containing a "prompt" field (see the example below)
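A minimal sketch of such a test call—the webhook URL is a placeholder for your deployed endpoint:

```javascript
// Test the image-generation endpoint (URL is a placeholder for your n8n webhook).
fetch("https://your-n8n-instance/webhook/text-to-image", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "a watercolor painting of a lighthouse at dawn" })
})
  .then(res => res.json())
  .then(result => console.log(result)) // structured JSON with the generated image info
  .catch(console.error);
```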
by Marth
## How It Works ⚙️

This workflow acts as a communication bridge for your candidate pipeline:

1. **Webhook Trigger (Status Update)**: 🚀 The workflow activates when it receives data indicating a candidate's status has changed. This data could come from an internal form, a custom script, or a webhook from a basic Applicant Tracking System (ATS).
2. **Extract & Prepare Data (Function)**: 🧹 This node processes the incoming data. It extracts key information such as the candidate's name, the position they applied for, their previous status (if available), and their new status. It then formats this information into a clear, concise message suitable for a notification.
3. **Send Slack Notification**: 📢 The prepared message is sent to a designated Slack channel (e.g., #recruitment-updates). This provides instant, real-time updates to your team, ensuring everyone is on the same page.
4. **(Alternative: Send Email Notification)**: This node can easily be swapped with a Gmail or SendGrid node to send email notifications to a predefined list of recipients instead of Slack.

## How to Set Up 🛠️

Follow these steps carefully to get your "Automated Candidate Status Notifier" workflow up and running:

### 1. Import the Workflow JSON

- Open your n8n instance.
- Click on 'Workflows' in the left sidebar.
- Click the '+' button or 'New' to create a new workflow.
- Click the '...' (More Options) icon in the top right.
- Select 'Import from JSON' and paste the entire JSON code for this workflow.

### 2. Configure the Webhook Trigger (Status Update)

- Locate the 'Webhook Trigger (Status Update)' node (1. Webhook Trigger).
- Activate the workflow. n8n will provide a unique 'Webhook URL'.
- Crucial step: configure your data-sending system (e.g., a form submission, an ATS's webhook settings, or your custom script) to send candidate status update data (preferably in JSON format via POST request) to this n8n Webhook URL. A sample payload is sketched after these steps.

### 3. Configure Extract & Prepare Data (Function)

- Locate the 'Extract & Prepare Data' node (2. Extract & Prepare Data).
- Adjust field names: review the functionCode inside this node. You MUST adjust the variable assignments (e.g., inputData.candidateName, inputData.position) to accurately match the exact field names your sending system uses for candidate name, position, new status, old status, and notes. Use the 'Test Workflow' feature after sending a test webhook to inspect the incoming items[0].json.body data structure.
- The node automatically formats messages for Slack and Email.

### 4. Configure Send Slack Notification

- Locate the 'Send Slack Notification' node (3. Send Slack Notification).
- Credentials: select your existing Slack API credential or click 'Create New' to set one up. Replace YOUR_SLACK_CREDENTIAL_ID with the actual ID or name of your credential from your n8n credentials.
- Channel: replace YOUR_SLACK_CHANNEL_ID_OR_NAME with the exact ID or name of the Slack channel where you want to receive notifications (e.g., #recruitment-updates).

### 5. Optional: Switch to Email Notification (Gmail/SendGrid/etc.)

- Delete the 'Send Slack Notification' node.
- Add a new 'Gmail' or 'SendGrid' (or your preferred email service) node and configure its credentials.
- Set the 'To Email' field (e.g., your-team-email@example.com).
- Set the 'Subject' to ={{ $json.emailSubject }}.
- Set the 'HTML' body to ={{ $json.emailBody }}.
- Connect it from the 'Extract & Prepare Data' node.

### 6. Review and Activate

- Thoroughly review all node configurations. Ensure all placeholder values (like YOUR_...) are replaced and settings are correct.
- Click the 'Save' button in the top right corner.
- Finally, toggle the 'Inactive' switch to 'Active' to enable your workflow. 🟢

Your automated candidate status notifier is now live, keeping your team updated in real time!
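For reference, a sketch of a status-update payload your sending system could POST to the webhook. These field names are assumptions—they must match whatever you map inside the 'Extract & Prepare Data' node:

```json
{
  "candidateName": "Jane Smith",
  "position": "Backend Engineer",
  "oldStatus": "Phone Screen",
  "newStatus": "Onsite Interview",
  "notes": "Passed technical screen; scheduling onsite for next week."
}
```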
by Miquel Colomer
## 📝 Overview

This workflow transforms n8n into a smart real-estate concierge by combining an AI chat interface with Bright Data's marketplace datasets. Users interact via chat to specify city, price, bedrooms, and bathrooms—and receive a curated list of three homes for sale, complete with images and briefings.

## 🎥 Workflow in Action

Want to see this workflow in action? Play the video.

## 🔑 Key Features

- **AI-Powered Chat Trigger**: Instantly start conversations using LangChain's Chat Trigger node.
- **Contextual Memory**: Retains up to 30 recent messages for coherent back-and-forth.
- **Bright Data Integration**: Dynamically filters "FOR_SALE" properties by city, price, bedrooms, and bathrooms (limit = 3).
- **Automated Snapshot Retrieval**: Polls for dataset readiness and fetches the full snapshot content.
- **HTML-Formatted Output**: Presents results as a `<ul>` of `<li>` items, embedding property images.

## 🚀 How It Works (Step-by-Step)

### Prerequisites

- n8n ≥ v1.0
- Community nodes: install n8n-nodes-brightdata (an unverified community node)
- API credentials: OpenAI, Bright Data
- Webhook endpoint to receive chat messages

### Node Configuration

1. **Chat Trigger**: Listens for incoming chat messages; shows a welcome screen.
2. **Memory Buffer**: Stores the last 30 messages for context.
3. **OpenAI Chat Model**: Uses GPT-4o-mini to interpret user intent.
4. **Real Estate AI Agent**: Orchestrates filtering logic, calls tools, and formats responses.
5. **Bright Data "Filter Dataset" Tool**: Applies user-defined filters plus homeStatus = FOR_SALE.
6. **Wait & Recover Snapshot**: Polls until the snapshot is ready, then fetches content.
7. **Get Snapshot Content**: Converts raw JSON into a structured list.

### Workflow Logic

1. User sends search criteria → the agent validates inputs.
2. The agent invokes "Filter Dataset" once all filters are present.
3. Upon dataset readiness, the snapshot is retrieved and parsed.
4. Final output is rendered as a bullet list with property images.

### Testing & Optimization

- Use the built-in Execute Workflow trigger for rapid dry runs.
- Inspect node outputs in n8n's UI; adjust filter defaults or snapshot limits.
- Tune OpenAI model parameters (e.g., maxIterations) for faster responses.

### Deployment & Monitoring

- Activate the main workflow and expose its webhook URL.
- Monitor executions in the "Executions" panel; set up alerts for errors.
- Archive or duplicate workflows as needed; update credentials via the credential manager.

## ✅ Pre-requisites

- **Bright Data Account**: API key for marketplaceDataset.
- **OpenAI Account**: Access to the GPT-4o-mini model.
- **n8n Version**: v1.0 or later with community node support.
- **Permissions**: Webhook access, credential vault read/write.

## 👤 Who Is This For?

- Real-estate agencies and brokers seeking to automate client queries.
- PropTech startups building conversational search tools.
- Data analysts who want on-demand property snapshots without manual scraping.

## 📈 Benefits & Use Cases

- **Time Savings**: Replace manual MLS searches with an AI-driven chat.
- **Scalability**: Serve multiple clients simultaneously via webchat or embedded widget.
- **Consistency**: Always report exactly three properties, ensuring concise results.
- **Engagement**: Visual listings with images boost user satisfaction and conversion.

Workflow created and verified by Miquel Colomer https://www.linkedin.com/in/miquelcolomersalas/ and N8nHackers https://n8nhackers.com
by Miquel Colomer
🎯 Precision Prospecting: Automate LinkedIn Lead Gen with n8n & Bright Data

## 📝 Overview

This workflow turns n8n into an AI-powered prospector, automatically searching Google for LinkedIn profiles, scraping profile data via Bright Data, and summarizing key details. Ideal for sales and recruitment teams seeking targeted lead lists without manual research.

## 🎥 Workflow in Action

Want to see this workflow in action? A chat window output is shown below.

## 🔑 Key Features

- **AI Chat Trigger**: Start prospecting via conversational prompts.
- **Contextual Memory**: Retains the last 20 messages for coherent dialogue.
- **Automated Google Search**: Generates site-restricted queries and fetches the top result.
- **Bright Data Scraping**: Synchronously scrapes LinkedIn profile details by URL.
- **Intelligent Filtering**: Extracts only valid LinkedIn profile links.
- **Limit Control**: Returns a single, most relevant profile per request.
- **LLM Summary**: Uses GPT-4o-mini to interpret and present scraped data.

## 🚀 How It Works (Step-by-Step)

### Prerequisites

- n8n ≥ v1.0 with community nodes: install n8n-nodes-brightdata (an unverified community node).
- API credentials: OpenAI, Bright Data (web unlocker zone "web_unlocker1").
- Webhook endpoint for the chat trigger.

### Node Configuration

1. **When chat message received** (chatTrigger): Fires on user prompt.
2. **Simple Memory1** (memoryBufferWindow): Stores the last 20 chat messages.
3. **AI Prospector Agent** (agent): Orchestrates the search logic.
4. **Get 1 Google Result** (brightData): Performs a Google search with site:linkedin.com/in.
5. **Get Links from Body** (html): Extracts all `<a>` hrefs from the search result page.
6. **Extract Links** (splitOut): Splits out individual link entries.
7. **Filter only LinkedIn Profiles** (filter): Ensures the URL contains "linkedin.com/" and starts with "https://".
8. **Limit** (limit): Restricts output to the first valid profile URL.
9. **Search LinkedIn URI** (toolWorkflow): Passes the URL to a secondary workflow to fetch the first link.
10. **Get LinkedIn Profile Data** (brightDataTool): Scrapes the profile JSON.
11. **OpenAI Chat Model** (lmChatOpenAi): Summarizes and formats the scraped data.

### Workflow Logic

1. The user asks for a person by company & name, company & position, or LinkedIn URL.
2. The agent builds a Google query (e.g., site:linkedin.com/in bright data cmo) and calls "Get 1 Google Result."
3. Extracted links are filtered and limited to the top valid profile.
4. If the user provided a direct LinkedIn URL, the agent skips the search and scrapes immediately.
5. The scraped profile JSON is passed to GPT-4o-mini to generate a concise summary.

### Testing & Optimization

- Trigger via Execute Workflow for dry runs.
- Inspect intermediate node outputs in n8n's Execution panel.
- Adjust maxIterations or the memory window length for performance.
- Tune the Bright Data zone or country settings to optimize scraping speed.

### Deployment & Monitoring

- Activate the workflow and expose its webhook URL.
- Use n8n's built-in alerts or external monitoring (e.g., Slack notifications) on failures.
- Rotate credentials via n8n's Credential Vault when needed.
- Version-control the workflow via duplicates or Git-backed n8n instances.

## ✅ Pre-requisites

- **OpenAI Account**: API key for GPT-4o-mini.
- **Bright Data Account**: Zone "web_unlocker1" and dataset gd_l1viktl72bvl7bjuj0.
- **n8n Version**: v1.0+ with community nodes installed.
- **Permissions**: Webhook access, Credential Vault read/write.

## 👤 Who Is This For?

- Sales teams automating outbound LinkedIn prospecting.
- Recruiters sourcing candidates without manual scraping.
- Marketing ops looking to enrich CRM with accurate profile data.
## 📈 Benefits & Use Cases

- **Efficiency**: Reduces hours of manual search and data entry to seconds.
- **Accuracy**: Filters out non-LinkedIn links and ensures high-quality results.
- **Scalability**: Handles multiple prospect requests concurrently via chat or API.
- **Integration**: Easily hooks into CRMs or email sequencers downstream.

Workflow created and verified by Miquel Colomer https://www.linkedin.com/in/miquelcolomersalas/ and N8nHackers https://n8nhackers.com
by Vitali
## Template Description

This n8n workflow manages Fastmail masked email addresses using the Fastmail API. It provides the following functionality:

- **Retrieve all masked emails**: Fetches all masked email addresses associated with the Fastmail account.
- **Create masked email**: Creates a new masked email with a specified state (pending, enabled, etc.).
- **Update masked email state**: Updates the state of a masked email, such as enabling, disabling, or deleting it.
- **Generate HTML template**: Constructs an HTML table to display the masked emails in a user-friendly format.

## Steps to Make It Work

1. **Webhook node**: Listens for incoming requests to manage masked emails. Needs Basic Authentication credentials to secure the endpoint.
2. **Session node**: Sends a request to obtain session information from Fastmail's API. Requires an HTTP Header Auth credential with your Fastmail API token.
3. **Switch node**: Routes the workflow based on the state of the incoming masked email request (pending, enabled, disabled, deleted).
4. **HTTP Request nodes**: Handle the various Fastmail API calls for masked emails (get, set, update, delete). All HTTP Request nodes require an HTTP Header Auth credential with the Fastmail API token (see the example call below).
5. **Set node**: Gathers the retrieved masked email list into an array for further processing.
6. **HTML node**: Generates an HTML template to render the masked email addresses in a table format.
7. **Respond to Webhook node**: Sends the HTML table back to the client in response to the webhook request.

## Needed Credentials

- **Fastmail Masked E-Mail Addresses**: An API token from Fastmail's API. Each HTTP call to Fastmail requires this credential for authentication.

## Note

Ensure that you correctly configure authentication for the API calls and webhook security. Use your actual Fastmail API credentials with the correct scope. The workflow assumes that the Fastmail API is correctly configured and accessible from your n8n instance. Update URLs and credential IDs according to your n8n configuration.
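For reference, Fastmail's masked email endpoints are JMAP method calls. A sketch of a retrieval request—the accountId comes from the session response, and the masked-email capability URN is Fastmail's documented extension (verify details against Fastmail's API docs):

```http
POST https://api.fastmail.com/jmap/api/
Authorization: Bearer <fastmail-api-token>
Content-Type: application/json

{
  "using": ["urn:ietf:params:jmap:core", "https://www.fastmail.com/dev/maskedemail"],
  "methodCalls": [
    ["MaskedEmail/get", { "accountId": "<accountId from session>", "ids": null }, "0"]
  ]
}
```

Creating or updating a masked email uses MaskedEmail/set in the same envelope, with the desired state (pending, enabled, disabled, deleted) in the record—matching the states the Switch node routes on.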
by Friedemann Schuetz
Welcome to my Airbnb Telegram Agent Workflow!

This workflow creates an intelligent Telegram bot that helps users search and find Airbnb accommodations using natural language queries and voice messages.

DISCLAIMER: This workflow only works with self-hosted n8n instances! You have to install the n8n-nodes-mcp-client Community Node!

## What this workflow does

This workflow processes incoming Telegram messages (text or voice) and provides personalized Airbnb accommodation recommendations. The AI agent understands natural language queries, searches through Airbnb data using MCP tools, and returns mobile-optimized results with clickable links, prices, and key details.

### Key Features

- Voice message support (speech-to-text and text-to-speech)
- Conversation memory for context-aware responses
- Mobile-optimized formatting for Telegram
- Real-time Airbnb data access via MCP integration

### This workflow has the following sequence

1. **Telegram Trigger** – Receives incoming messages from users
2. **Text or Voice Switch** – Routes based on message type
3. **Voice Processing** (if applicable) – Downloads and transcribes voice messages
4. **Text Preparation** – Formats text input for the AI agent
5. **Airbnb AI Agent** – Core logic that:
   - Lists available MCP tools for Airbnb data
   - Executes searches with parsed parameters
   - Formats results for mobile display
6. **Response Generation** – Sends the formatted text response
7. **Voice Response** (optional) – Creates and sends an audio summary

### Requirements

- **Telegram Bot API**: Documentation
  - Create a bot via @BotFather on Telegram
  - Get the bot token and configure the webhook
- **OpenAI API**: Documentation
  - Used for speech transcription (Whisper)
  - Used for chat completion (GPT-4)
  - Used for text-to-speech generation
- **MCP Community Client Node**: Documentation
  - Custom integration for Airbnb data
  - Requires an MCP server set up with an Airbnb/Airtable connection
  - Provides tools for accommodation search and details

Important: You need to set up an MCP server with Airbnb data access. The workflow uses MCP tools to retrieve real accommodation data, so ensure your MCP server is properly configured with the Airtable/Airbnb integration (a sample client configuration is sketched below).

### Configuration Notes

- Update the Telegram chat ID in the trigger for your specific bot
- Modify the system prompt in the Airbnb Agent for different use cases
- The workflow supports individual users and can be extended to group chats

Feel free to contact me via LinkedIn if you have any questions!
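As an illustration only—the package name and fields below are assumptions, and you should point the client at whichever MCP server actually exposes your Airbnb/Airtable data—an MCP Client credential for a STDIO server is typically configured with a launch command and arguments, for example:

```json
{
  "command": "npx",
  "args": ["-y", "@openbnb/mcp-server-airbnb"]
}
```

Once the credential is set, the agent's "list tools" step should show the server's search and detail tools; if it returns an empty list, the server command or connection is the first thing to check.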
by explorium
Research Agent - Automated Sales Meeting Intelligence

This n8n workflow automatically prepares comprehensive sales research briefs every morning for your upcoming meetings by analyzing both the companies you're meeting with and the individual attendees. The workflow connects to your calendar, identifies external meetings, enriches companies and contacts with deep intelligence from Explorium, and delivers personalized research reports—giving your sales team everything they need for informed, confident conversations.

DEMO: Template Demo

## Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

**Google Calendar (or Outlook)**
- Type: OAuth2
- Used for: Reading daily meeting schedules and identifying external attendees
- Alternative: Microsoft Outlook Calendar
- Get credentials at Google Cloud Console

**Explorium API**
- Type: Generic Header Auth
- Header: Authorization
- Value: Bearer YOUR_API_KEY
- Used for: Business/prospect matching, firmographic enrichment, professional profiles, LinkedIn posts, website changes, competitive intelligence
- Get your API key at the Explorium Dashboard

**Explorium MCP**
- Type: HTTP Header Auth
- Used for: Real-time company intelligence and supplemental research for AI agents
- Connect to: https://mcp.explorium.ai/mcp

**Anthropic API**
- Type: API Key
- Used for: AI-powered company and attendee research analysis
- Get your API key at the Anthropic Console

**Slack (or preferred output)**
- Type: OAuth2
- Used for: Delivering research briefs
- Alternative options: Google Docs, Email, Microsoft Teams, CRM updates

Go to Settings → Credentials, create these credentials, and assign them in the respective nodes before running the workflow.

## Workflow Overview

### Node 1: Schedule Trigger

Automatically runs the workflow on a recurring schedule.

- Type: Schedule Trigger
- Default: Every morning before business hours
- Customizable: Set to any interval (hourly, daily, weekly) or specific times
- Alternative trigger options: Manual Trigger (on-demand execution), Webhook (triggered by calendar events or CRM updates)

### Node 2: Get many events

Retrieves meetings from your connected calendar.

- Calendar source: Google Calendar (or Outlook)
- Authentication: OAuth2
- Time range: Current day + 18 hours (configurable via timeMax)
- Returns: All calendar events with attendee information, meeting titles, times, and descriptions

### Node 3: Filter for External Meetings

Identifies meetings with external participants and filters out internal-only meetings.

Filtering logic:
- Extracts attendee email domains
- Excludes your company domain (e.g., 'explorium.ai')
- Excludes calendar system addresses (e.g., 'resource.calendar.google.com')
- Only passes events with at least one external attendee

Important setup note: Replace 'explorium.ai' in the code node with your company domain to properly filter internal meetings.

Output:
- Events with external participants only
- external_attendees: Array of external contact emails
- company_domains: Unique list of external company domains per meeting
- external_attendee_count: Number of external participants

## Company Research Pipeline

### Node 4: Loop Over Items

Iterates through each meeting with external attendees for company research.

### Node 5: Extract External Company Domains

Creates a deduplicated list of all external company domains from the current meeting.

### Node 6: Explorium API: Match Business

Matches company domains to Explorium's business entity database (a sample request is sketched after the flow summary below).
- Method: POST
- Endpoint: /v1/businesses/match
- Authentication: Header Auth (Bearer token)

Returns:
- business_id: Unique Explorium identifier
- matched_businesses: Array of matches with confidence scores
- Company name and basic info

### Node 7: If

Validates that a business match was found before proceeding to enrichment.

- Condition: business_id is not empty
- If true: Proceed to the parallel enrichment nodes
- If false: Skip to the next company in the loop

### Nodes 8-9: Parallel Company Enrichment

**Node 8: Explorium API: Business Enrich**
- Endpoints: /v1/businesses/firmographics/enrich, /v1/businesses/technographics/enrich
- Enrichment types: firmographics, technographics
- Returns: Company name, description, website, industry, employees, revenue, headquarters location, ticker symbol, LinkedIn profile, logo, full tech stack, nested tech stack by category, BI & analytics tools, sales tools, marketing tools

**Node 9: Explorium API: Fetch Business Events**
- Endpoint: /v1/businesses/events/fetch
- Event types: New funding rounds, new investments, mergers & acquisitions, new products, new partnerships
- Date range: September 1, 2025 – November 4, 2025
- Returns: Recent business milestones and financial events

### Node 10: Merge

Combines the enrichment responses and events data into a single data object.

### Node 11: Cleans Merge Data Output

Transforms the merged enrichment data into a structured format for AI analysis.

### Node 12: Company Research Agent

AI agent (Claude Sonnet 4) that analyzes company data to generate actionable sales intelligence.

Input: Structured company profile with all enrichment data

Analysis focus:
- Company overview and business context
- Recent website changes and strategic shifts
- Tech stack and product focus areas
- Potential pain points and challenges
- How Explorium's capabilities align with their needs
- Timely conversation starters based on recent activity

Connected to Explorium MCP: Can pull additional real-time intelligence if needed to create more detailed analysis.

### Node 13: Create Company Research Output

Formats the AI analysis into a readable, shareable research brief.

## Attendee Research Pipeline

### Node 14: Create List of All External Attendees

Compiles all unique external attendee emails across all meetings.

### Node 15: Loop Over Items2

Iterates through each external attendee for individual enrichment.

### Node 16: Extract External Company Domains1

Extracts the company domain from each attendee's email.

### Node 17: Explorium API: Match Business1

Matches the attendee's company domain to get the business_id for prospect matching.

- Method: POST
- Endpoint: /v1/businesses/match
- Purpose: Link the attendee to their company

### Node 18: Explorium API: Match Prospect

Matches the attendee email to Explorium's professional profile database.

- Method: POST
- Endpoint: /v1/prospects/match
- Authentication: Header Auth (Bearer token)
- Returns: prospect_id — unique professional profile identifier

### Node 19: If1

Validates that a prospect match was found.

- Condition: prospect_id is not empty
- If true: Proceed to prospect enrichment
- If false: Skip to the next attendee

### Node 20: Explorium API: Prospect Enrich

Enriches the matched prospect using multiple Explorium endpoints.
- Enrichment types: contacts, profiles, linkedin_posts
- Endpoints: /v1/prospects/contacts/enrich, /v1/prospects/profiles/enrich, /v1/prospects/linkedin_posts/enrich

Returns:
- **Contacts**: Professional email, email status, all emails, mobile phone, all phone numbers
- **Profiles**: Full professional history, current role, skills, education, company information, experience timeline, job titles and seniority
- **LinkedIn Posts**: Recent LinkedIn activity, post content, engagement metrics, professional interests and thought leadership

### Node 21: Cleans Enrichment Outputs

Structures the prospect data for AI analysis.

### Node 22: Attendee Research Agent

AI agent (Claude Sonnet 4) that analyzes prospect data to generate personalized conversation intelligence.

Input: Structured professional profile with activity data

Analysis focus:
- Career background and progression
- Current role and responsibilities
- Recent LinkedIn activity themes and interests
- Potential pain points in their role
- Relevant Explorium capabilities for their needs
- Personal connection points (education, interests, previous companies)
- Opening conversation starters

Connected to Explorium MCP: Can gather additional company or market context if needed.

### Node 23: Create Attendee Research Output

Formats the attendee analysis into a readable brief with clear sections.

### Node 24: Merge2

Combines the company research output with attendee information for final assembly.

### Node 25: Loop Over Items1

Manages the final loop that combines company and attendee research for output.

### Node 26: Send a message (Slack)

Delivers the combined research briefs to the specified Slack channel or user.

Alternative output options:
- **Google Docs**: Create a formatted document per meeting
- **Email**: Send to the meeting organizer or sales rep
- **Microsoft Teams**: Post to channels or DMs
- **CRM**: Update opportunity/account records with research
- **PDF**: Generate downloadable research reports

## Workflow Flow Summary

1. **Schedule**: The workflow runs automatically every morning
2. **Fetch Calendar**: Pull today's meetings from Google Calendar/Outlook
3. **Filter**: Identify meetings with external attendees only
4. **Extract Companies**: Get unique company domains from external attendees
5. **Extract Attendees**: Compile a list of all external contacts

Company research path:

6. **Match Companies**: Identify businesses in the Explorium database
7. **Enrich (parallel)**: Pull firmographics, website changes, competitive landscape, events, and challenges
8. **Merge & Clean**: Combine and structure company data
9. **AI Analysis**: Generate a company research brief with insights and talking points
10. **Format**: Create readable company research output

Attendee research path:

11. **Match Prospects**: Link attendees to professional profiles
12. **Enrich (parallel)**: Pull profiles, job changes, and LinkedIn activity
13. **Merge & Clean**: Combine and structure prospect data
14. **AI Analysis**: Generate attendee research with background and approach
15. **Format**: Create readable attendee research output

Delivery:

16. **Combine**: Merge company and attendee research for each meeting
17. **Send**: Deliver complete research briefs to Slack/preferred platform

This workflow eliminates manual pre-meeting research by automatically preparing comprehensive intelligence on both companies and individuals—giving sales teams the context and confidence they need for every conversation.
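As a reference for the match steps above, the business match call is a POST carrying the domains to resolve. The payload shape below is a sketch—field names are assumptions to be confirmed against Explorium's API reference:

```http
POST https://api.explorium.ai/v1/businesses/match
Authorization: Bearer <YOUR_API_KEY>
Content-Type: application/json

{
  "businesses_to_match": [
    { "domain": "acme.com" },
    { "domain": "globex.io" }
  ]
}
```

The returned business_id values are what the If node checks before the enrichment calls fire; the prospect match at /v1/prospects/match follows the same pattern with attendee emails.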
## Customization Options

**Calendar Integration** — works with multiple calendar platforms:
- **Google Calendar**: Full OAuth2 integration
- **Microsoft Outlook**: Calendar API support
- **CalDAV**: Generic calendar protocol support

**Trigger Flexibility** — adjust when research runs:
- **Morning Routine**: Default daily at 7 AM
- **On-Demand**: Manual trigger for specific meetings
- **Continuous**: Hourly checks for new meetings

**Enrichment Depth** — add or remove enrichment endpoints:
- **Company**: Technographics, funding history, news mentions, hiring signals
- **Prospects**: Contact information, social profiles, company changes
- **Customizable**: Select only needed data to optimize speed and costs

**Research Scope** — configure what gets researched:
- **All External Meetings**: Default behavior
- **Filtered by Keywords**: Only meetings with specific titles
- **By Attendee Count**: Only meetings with X+ external attendees
- **By Calendar**: Specific calendars only

**Output Destinations** — deliver research to your preferred platform:
- **Messaging**: Slack, Microsoft Teams, Discord
- **Documents**: Google Docs, Notion, Confluence
- **Email**: Gmail, Outlook, custom SMTP
- **CRM**: Salesforce, HubSpot (update account notes)
- **Project Management**: Asana, Monday.com, ClickUp

**AI Model Options** — swap AI providers based on needs:
- Default: Anthropic Claude (Sonnet 4)
- Alternatives: OpenAI GPT-4, Google Gemini

## Setup Notes

- **Domain Configuration**: Replace 'explorium.ai' in the Filter for External Meetings code node with your company domain
- **Calendar Connection**: Ensure OAuth2 credentials have calendar read permissions
- **Explorium Credentials**: Both the API key and MCP credentials must be configured
- **Output Timing**: The schedule trigger should run with enough lead time before the first meetings
- **Rate Limits**: Adjust loop batch sizes if you hit API rate limits during enrichment
- **Slack Configuration**: Select the destination channel or user for research delivery
- **Data Privacy**: Research is based on publicly available professional information and company data

This workflow acts as your automated sales researcher, preparing detailed intelligence reports every morning so your team walks into every meeting informed, prepared, and ready to have meaningful conversations that drive business forward.
by Angel Menendez
Video Demo: Click here to see a video of this workflow in action.

## Summary Description

The "IT Department Q&A Workflow" is designed to streamline and automate the process of handling IT-related inquiries from employees through Slack. When an employee sends a direct message (DM) to the IT department's Slack channel, the workflow is triggered. The initial step involves the "Receive DMs" node, which listens for new messages. Upon receiving a message, the workflow verifies the webhook by responding to Slack's challenge request, ensuring that the communication channel is active and secure (see the example exchange below).

Once the webhook is verified, the workflow checks whether the message sender is a bot using the "Check if Bot" node. If the sender is identified as a bot, the workflow terminates the process to avoid unnecessary actions. If the sender is a human, the workflow sends an acknowledgment message back to the user, confirming that their query is being processed. This is achieved through the "Send Initial Message" node, which posts a simple message like "On it!" to the user's Slack channel.

The core functionality of the workflow is powered by the "AI Agent" node, which utilizes the OpenAI GPT-4 model to interpret and respond to the user's query. This AI-driven node processes the text of the received message, generating an appropriate response based on the context and information available. To maintain conversation context, the "Window Buffer Memory" node stores the last five messages from each user, ensuring that the AI agent can provide coherent and contextually relevant answers. Additionally, the workflow includes a custom Knowledge Base (KB) tool (see that tool template here) that integrates with the AI agent, allowing it to search the company's internal KB for relevant information.

After generating the response, the workflow cleans up the initial acknowledgment message using the "Delete Initial Message" node to keep the conversation thread clean. Finally, the generated response is sent back to the user via the "Send Message" node, providing them with the information or assistance they requested. This workflow effectively automates the IT support process, reducing response times and improving efficiency.

To quickly deploy the Knowledge Ninja app in Slack, use the app manifest below, and don't forget to replace the two sample URLs:

```json
{
  "display_information": {
    "name": "Knowledge Ninja",
    "description": "IT Department Q&A Workflow",
    "background_color": "#005e5e"
  },
  "features": {
    "bot_user": {
      "display_name": "IT Ops AI SlackBot Workflow",
      "always_online": true
    }
  },
  "oauth_config": {
    "redirect_urls": [
      "Replace everything inside the double quotes with your Slack OAuth redirect URL, for example: https://n8n.domain.com/rest/oauth2-credential/callback"
    ],
    "scopes": {
      "user": [
        "search:read"
      ],
      "bot": [
        "chat:write",
        "chat:write.customize",
        "groups:history",
        "groups:read",
        "groups:write",
        "groups:write.invites",
        "groups:write.topic",
        "im:history",
        "im:read",
        "im:write",
        "mpim:history",
        "mpim:read",
        "mpim:write",
        "mpim:write.topic",
        "usergroups:read",
        "usergroups:write",
        "users:write",
        "channels:history"
      ]
    }
  },
  "settings": {
    "event_subscriptions": {
      "request_url": "Replace everything inside the double quotes with your workflow webhook URL, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad",
      "bot_events": [
        "message.im"
      ]
    },
    "org_deploy_enabled": false,
    "socket_mode_enabled": false,
    "token_rotation_enabled": false
  }
}
```
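For reference, the challenge verification mentioned above is Slack's standard Events API URL verification: when you save the request URL, Slack sends a one-time challenge and expects the same value echoed back in an HTTP 200 response (challenge value shown here is illustrative):

```
// Slack -> your webhook (one-time, when you save the Request URL):
{
  "token": "…",
  "type": "url_verification",
  "challenge": "3eZbrw1aBm2rZgRNFdxV2595E9CY3gmdALWMmHkvFXO7tYXAYM8P"
}

// Your webhook's 200 response body must return that same value:
{ "challenge": "3eZbrw1aBm2rZgRNFdxV2595E9CY3gmdALWMmHkvFXO7tYXAYM8P" }
```

Only after this exchange succeeds will Slack start delivering the message.im events that trigger the rest of the workflow.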