by Manu
For every release on GitHub, this workflow creates an issue on GitLab.

Setup:
- Copy the workflow to your n8n instance
- Fill in the missing fields (credentials & repo names)

The workflow is based on a Cron node so that you can track GitHub repos you're not a member of (where you won't be able to create a webhook). If you do own the repo, you could replace the Cron & GitHub nodes with a GitHub Trigger.
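Since the Cron-based approach polls rather than reacts, the core logic is to fetch the latest release and compare it with the last one seen. A minimal sketch of that check, assuming the public GitHub REST endpoint; `lastSeenTag` is an illustrative value persisted between runs (e.g. in workflow static data):

```typescript
// Sketch of the Cron-driven release check (illustrative, not the exact workflow nodes).
// Assumes the public GitHub REST API; "lastSeenTag" is a hypothetical value
// stored between runs.
async function checkForNewRelease(owner: string, repo: string, lastSeenTag: string | null) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/releases/latest`, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const release = await res.json();
  // Only create a GitLab issue when the tag changed since the last run
  if (release.tag_name !== lastSeenTag) {
    return { isNew: true, tag: release.tag_name, title: release.name, notes: release.body };
  }
  return { isNew: false };
}
```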
by Charles
Modern AI systems are powerful but pose privacy risks when handling sensitive data. Organizations need AI capabilities while ensuring:
✅ Sensitive data never leaves secure environments
✅ Compliance with regulations (GDPR, HIPAA, PCI, SOX)
✅ Real-time decision making about data sensitivity
✅ Comprehensive audit trails for regulatory review

The Concept: Intelligent Data Classification + Smart Routing
The goal of this concept is to build the foundations of safe and compliant use of LLMs in agentic workflows by automatically detecting sensitive data, applying sanitization rules, and intelligently routing requests through secure processing channels. The workflow analyzes the user's chat or webhook input and attempts to detect PII using the Enhanced PII Pattern Detector. If PII is detected, the workflow processes that input through a series of compliance, auditing, and security steps that log and sanitize the request before any LLM is pinged.

Why Multi-Tier Routing?
Traditional systems use binary decisions (sensitive/not sensitive). Our 3-tier approach provides:
✅ Granular Security: Critical PII gets maximum protection
✅ Performance Optimization: Clean data gets full cloud capabilities
✅ Cost Efficiency: Expensive local processing only when needed
✅ User Experience: Maintains conversational flow across security levels

Why Context-Aware Detection?
Regex patterns alone miss contextual sensitivity. Our approach:
✅ Catches Intent: A "bank account" discussion is sensitive even without account numbers
✅ Reduces False Negatives: Medical discussions stay secure even without explicit medical IDs
✅ Proactive Protection: Identifies sensitive contexts before PII is shared
✅ Compliance Alignment: Matches how regulations actually define sensitive data

Why Risk Scoring vs Binary Classification?
Binary PII detection creates artificial boundaries. Risk scoring provides:
✅ Nuanced Decisions: Multiple low-risk patterns might aggregate to high risk
✅ Adaptive Thresholds: Organizations can adjust sensitivity based on their needs
✅ Better UX: Users aren't unnecessarily restricted for low-risk scenarios
✅ Audit Transparency: Clear reasoning for every routing decision
(A minimal scoring sketch is included at the end of this description.)

Why Comprehensive Monitoring?
Privacy systems require trust and verification:
✅ Compliance Proof: Audit trails demonstrate regulatory compliance
✅ Performance Optimization: Identify bottlenecks and improve efficiency
✅ Security Validation: Ensure no sensitive data leakage occurs
✅ Operational Insights: Understand usage patterns and system health

How to Install:
All you need for this workflow are credentials for your LLM providers, such as Ollama, OpenRouter, OpenAI, or Anthropic. The workflow is customizable and lets you define the best LLM and storage/memory solutions for your specific use case.
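To make the risk-scoring idea concrete, here is a minimal sketch of a pattern-based detector with aggregated scoring, as it might look inside an n8n Code node. The patterns, weights, and tier thresholds are illustrative assumptions, not the template's exact rules:

```typescript
// Illustrative PII risk scorer: multiple low-risk hits can aggregate to a higher tier.
// Patterns, weights, and thresholds are assumptions for this sketch.
const patterns: { name: string; regex: RegExp; weight: number }[] = [
  { name: "ssn", regex: /\b\d{3}-\d{2}-\d{4}\b/, weight: 1.0 },
  { name: "credit_card", regex: /\b(?:\d[ -]?){13,16}\b/, weight: 0.9 },
  { name: "email", regex: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/, weight: 0.4 },
  { name: "medical_context", regex: /\b(diagnosis|prescription|patient)\b/i, weight: 0.3 },
];

function scoreInput(text: string) {
  const hits = patterns.filter((p) => p.regex.test(text));
  const risk = Math.min(1, hits.reduce((sum, p) => sum + p.weight, 0));
  // Map the aggregate score onto the 3 routing tiers
  const tier = risk >= 0.8 ? "local-secure" : risk >= 0.3 ? "sanitize-then-cloud" : "cloud";
  return { risk, tier, matched: hits.map((p) => p.name) };
}

// Example: "patient email: jane@example.com" aggregates to the sanitize-then-cloud tier
```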
by Arnaud MARIE
Replicate Line Items on New Deal in HubSpot

Workflow Use Case
This workflow solves the problem of manually copying line items from one deal to another in HubSpot, reducing manual work and minimizing errors.

What this workflow does
- **Triggers** upon receiving a webhook with deal IDs.
- **Retrieves** the IDs of the won and created deals.
- **Fetches** line items associated with the won deal.
- **Extracts** product SKUs from the retrieved line items.
- **Fetches** product details based on SKUs.
- **Creates** new line items for the created deal and associates them.
- **Sends** a Slack notification with success details.

Setup steps
1. Create a HubSpot deal workflow:
   1.1 Set up your trigger (e.g. when deal stage = Won)
   1.2 Add step: Create Record (deal)
   1.3 Add step: Send webhook. The webhook should be a GET request to this n8n workflow's trigger. Set two query parameters (see the example after this list):
   - deal_id_won: the Record ID of the deal triggering the HubSpot workflow
   - deal_id_create: the Record ID of the deal created above (click Insert Data -> The created object)
2. Set up your HubSpot App Token in HubSpot -> Settings -> Integration -> Private Apps
3. Set up your HubSpot Token integration using the predefined model
4. Set up your Slack connection
5. Add an error workflow to monitor errors
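For reference, a hedged sketch of what the incoming GET request could look like to the n8n Webhook node; the URL and record IDs are placeholders:

```typescript
// Example of the query string HubSpot sends and what the Webhook node receives.
// URL and record IDs are placeholders.
const exampleUrl =
  "https://your-n8n.example.com/webhook/replicate-line-items?deal_id_won=1234567890&deal_id_create=9876543210";

// Inside the workflow, the parameters arrive on the webhook item as:
const webhookItem = {
  query: {
    deal_id_won: "1234567890",
    deal_id_create: "9876543210",
  },
};
```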
by Friedemann Schuetz
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

Welcome to my Wikipedia Podcast Telegram Bot Workflow! This workflow creates an intelligent Telegram bot that transforms Wikipedia articles into engaging 5-minute podcast episodes using natural language queries and voice messages.

What this workflow does
This workflow processes incoming Telegram messages (text or voice) and generates professional podcast content about any Wikipedia topic (e.g. "Berlin", "Shakespeare", etc.). The AI agent researches the requested subject, creates a structured podcast script, and delivers it as high-quality audio directly through Telegram.

Key Features:
- Voice message support (speech-to-text and text-to-speech)
- Wikipedia research integration for accurate content
- Professional podcast structure (intro, main content, outro)
- Natural-sounding AI voice synthesis
- Conversational and educational tone optimized for audio consumption

This workflow has the following sequence:
1. Telegram Trigger - Receives incoming messages (text or voice) from users via the Telegram bot
2. Text or Voice Switch - Routes the message based on input type (text message vs. voice message; a sketch of this branching appears at the end of this description)
3. Voice Message Processing (if voice input) - Retrieves the voice file from Telegram and transcribes the voice message to text using OpenAI Whisper
4. Text Message Preparation (if text input) - Prepares the text message for the AI agent
5. Wikipedia Podcast Agent - Core AI agent that researches the requested topic using the Wikipedia tool, creates a professional 5-minute podcast script (600-750 words), follows a structured format (intro, main content, outro), and uses a conversational, accessible, and enthusiastic tone
6. ElevenLabs Text to Speech - Converts the podcast script into natural-sounding audio using AI voice synthesis
7. Send Voice Response - Delivers the generated podcast audio back to the user via Telegram

Requirements:
- **Telegram Bot API**: Documentation. Create a bot via @BotFather on Telegram; get the bot token and configure the webhook.
- **Anthropic API** (Claude 4 Sonnet): Documentation. Used for AI agent processing and podcast script generation; provides Wikipedia research capabilities.
- **OpenAI API**: Documentation. Used for speech transcription (Whisper model).
- **ElevenLabs API**: Documentation. Used for high-quality text-to-speech generation; provides natural-sounding voice synthesis.

Important: The workflow uses the Wikipedia tool integrated with Claude 4 Sonnet to ensure accurate and comprehensive research. The AI agent is specifically prompted to create engaging, educational podcast content suitable for audio consumption.

Configuration Notes:
- Update the Telegram chat ID in the trigger for your specific bot
- Modify the voice selection in ElevenLabs for different narrator styles
- The system prompt can be customized for different podcast formats or target audiences
- Supports both individual users and can be extended for group chats

Feel free to contact me via LinkedIn, if you have any questions!
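As referenced in the sequence above, a hedged illustration of the text-vs-voice routing step: a Telegram update carries a `voice` object (with a `file_id`) only for voice notes, so a Switch or Code node can branch on its presence. The node wiring here is simplified:

```typescript
// Simplified sketch of the Text or Voice routing decision.
// A Telegram message has "voice" only for voice notes and "text" for text messages.
interface TelegramMessage {
  text?: string;
  voice?: { file_id: string; duration: number };
}

function routeMessage(message: TelegramMessage): "voice-processing" | "text-preparation" {
  if (message.voice?.file_id) {
    // Voice path: download the file via the Telegram node, then transcribe with Whisper
    return "voice-processing";
  }
  // Text path: pass message.text straight to the podcast agent
  return "text-preparation";
}
```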
by Angel Menendez
Video Demo: Click here to see a video of this workflow in action.

Summary Description:
The "IT Department Q&A Workflow" is designed to streamline and automate the process of handling IT-related inquiries from employees through Slack. When an employee sends a direct message (DM) to the IT department's Slack channel, the workflow is triggered. The initial step involves the "Receive DMs" node, which listens for new messages. Upon receiving a message, the workflow verifies the webhook by responding to Slack's challenge request (sketched at the end of this description), ensuring that the communication channel is active and secure.

Once the webhook is verified, the workflow checks if the message sender is a bot using the "Check if Bot" node. If the sender is identified as a bot, the workflow terminates the process to avoid unnecessary actions. If the sender is a human, the workflow sends an acknowledgment message back to the user, confirming that their query is being processed. This is achieved through the "Send Initial Message" node, which posts a simple message like "On it!" to the user's Slack channel.

The core functionality of the workflow is powered by the "AI Agent" node, which utilizes the OpenAI GPT-4 model to interpret and respond to the user's query. This AI-driven node processes the text of the received message, generating an appropriate response based on the context and information available. To maintain conversation context, the "Window Buffer Memory" node stores the last five messages from each user, ensuring that the AI agent can provide coherent and contextually relevant answers. Additionally, the workflow includes a custom Knowledge Base (KB) tool (see that tool template here) that integrates with the AI agent, allowing it to search the company's internal KB for relevant information.

After generating the response, the workflow cleans up the initial acknowledgment message using the "Delete Initial Message" node to keep the conversation thread clean. Finally, the generated response is sent back to the user via the "Send Message" node, providing them with the information or assistance they requested. This workflow effectively automates the IT support process, reducing response times and improving efficiency.

To quickly deploy the Knowledge Ninja app in Slack, use the app manifest below and don't forget to replace the two sample URLs:

```json
{
  "display_information": {
    "name": "Knowledge Ninja",
    "description": "IT Department Q&A Workflow",
    "background_color": "#005e5e"
  },
  "features": {
    "bot_user": {
      "display_name": "IT Ops AI SlackBot Workflow",
      "always_online": true
    }
  },
  "oauth_config": {
    "redirect_urls": [
      "Replace everything inside the double quotes with your slack redirect oauth url, for example: https://n8n.domain.com/rest/oauth2-credential/callback"
    ],
    "scopes": {
      "user": ["search:read"],
      "bot": [
        "chat:write",
        "chat:write.customize",
        "groups:history",
        "groups:read",
        "groups:write",
        "groups:write.invites",
        "groups:write.topic",
        "im:history",
        "im:read",
        "im:write",
        "mpim:history",
        "mpim:read",
        "mpim:write",
        "mpim:write.topic",
        "usergroups:read",
        "usergroups:write",
        "users:write",
        "channels:history"
      ]
    }
  },
  "settings": {
    "event_subscriptions": {
      "request_url": "Replace everything inside the double quotes with your workflow webhook url, for example: https://n8n.domain.com/webhook/99db3e73-57d8-4107-ab02-5b7e713894ad",
      "bot_events": ["message.im"]
    },
    "org_deploy_enabled": false,
    "socket_mode_enabled": false,
    "token_rotation_enabled": false
  }
}
```
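For context on the challenge step described above: when you register the request URL, Slack sends a one-time `url_verification` event and expects the `challenge` value echoed back. A minimal sketch of that handshake, assuming a generic HTTP handler shape in front of the workflow:

```typescript
// Sketch of Slack's URL-verification handshake (what "Receive DMs" must answer
// once when the request URL is registered). The handler shape is illustrative.
interface SlackEventPayload {
  type: string;
  challenge?: string;
  event?: { type: string; text?: string; bot_id?: string };
}

function handleSlackEvent(payload: SlackEventPayload): { status: number; body: string } {
  if (payload.type === "url_verification" && payload.challenge) {
    // Echo the challenge back so Slack marks the endpoint as verified
    return { status: 200, body: payload.challenge };
  }
  // The "Check if Bot" step: bot messages carry a bot_id and should be ignored
  if (payload.event?.bot_id) {
    return { status: 200, body: "ignored" };
  }
  return { status: 200, body: "ok" }; // hand off to the AI agent downstream
}
```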
by jason
This workflow was originally presented at the February 2022 n8n Meetup.

Requirements
In order to use this workflow, you will need the following in place:
- A configured Baserow account
- A group in Baserow called User Empowerment Demo
- A database in the User Empowerment Demo group called Office Shopping List
- Inside the Office Shopping List database, two tables:
  - Shopping List: Column 1 - Single line text column named Item
  - Shopper: Column 1 - Single line text column named Name; Column 2 - Email column named Email
- An email account for sending out alerts

Customization
To make this workflow work for you, please customize the following items:
- All Baserow nodes will need to be updated with your own credentials, database, tables and fields
- The Send Shopping List node will need to be configured with your email credentials and email addresses
- The Create Shopper Form Set node will need to have the code in the HTML value modified to reflect your Production URL from the Submit Shopper node (see instructions below)
- The Cron node will need to be modified to reflect the timing that you wish to use

Modifying the Webform
The webform is the piece that people normally want to customize, but it is often the most complex because it is raw HTML. Here are some quick tips for making changes to the form.

Webform Nodes
There are two nodes that control what you see in the form:
- Create Shopper Form - displays the form and submits it to the correct webhook
- Create Response Page - displays the results when the form is submitted

Editing the Webform
The easiest way that I have found to edit the webform is to:
1. Open up the Set node (Create Shopper Form or Create Response Page) that contains the HTML you wish to edit
2. Copy the contents of the HTML value to your favourite HTML editor
3. Make your changes
4. Paste the updated HTML back into the Set node

Changing the Webhook URL the Webform Posts To
In order for the webform to work properly, do the following:
1. Determine the Production URL for the Submit Shopper webhook node
2. In the Create Shopper Form node, look for the following line in the HTML value: form action="https://tephlon.app.n8n.cloud/webhook/submit-shopper" method="POST"
3. Replace https://tephlon.app.n8n.cloud/webhook/submit-shopper with your Production URL

Changing the Webform Image
The image in the webform is embedded in the HTML of each of the Create Shopper Form and Create Response Page Set nodes and can be modified from there using these steps (a helper script for generating the CSS follows below):
1. Open up the appropriate Set node
2. In the HTML value, find the line that starts with background-image:. It will be followed by a long string that looks like random characters
3. Using a tool like Image to Base64 Converter, upload your image and generate a new CSS background source
4. Replace the original background-image: line (including all the "random" characters) with the new generated CSS background source
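If you'd rather generate that CSS line locally instead of using an online converter, a small Node script can do it. This is a convenience sketch, not part of the original workflow; the file name is a placeholder and the MIME type should match your image:

```typescript
// Convenience sketch: build a CSS background-image line from a local image file.
// "my-banner.jpg" is a placeholder; adjust the MIME type to match your image.
import { readFileSync } from "node:fs";

const base64 = readFileSync("my-banner.jpg").toString("base64");
const cssLine = `background-image: url("data:image/jpeg;base64,${base64}");`;

console.log(cssLine); // paste this over the old background-image: line in the Set node
```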
by Eric
This is a specific use case. The ElevenLabs guide for Cal.com bookings is comprehensive, but I was having trouble with the booking API request, so I built a simple workflow to validate the request and handle the booking creation.

Who's this for?
You have an ElevenLabs voice agent (or other external service) booking meetings in your Cal.com account, and you want more control over the book_meeting tool called by the voice agent.

How's it work?
1. The request is received by the webhook trigger node, sent from an ElevenLabs voice agent or another source. The request body contains contact info for the user with whom a meeting will be booked in Cal.com.
2. The workflow validates the input data for the fields required by Cal.com. If validation fails, a 400 Bad Request response is returned (a validation sketch is included at the end of this description).
3. If valid, the meeting is booked via the Cal.com API.

How do I use this?
Create a custom tool in the ElevenLabs agent setup, and connect it to the webhook trigger in this workflow. Add authorization for security. Instruct your voice agent to call this tool after it has collected the required information from the user.

Expected input structure
Note: Modify this according to your needs, but be sure to reflect your changes in all following nodes. The requirements here depend on the required fields of your Cal.com event type. If you have multiple event types in Cal.com with varying required fields, you'll need to handle that in this workflow and provide appropriate instructions in your *voice agent prompt*.

```json
"body": {
  "attendee_name": "Some Guy",
  "start": "2025-07-07T13:30:00Z",
  "attendee_phone": "+12125551234",
  "attendee_timezone": "America/New_York",
  "eventTypeId": 123456,
  "attendee_email": "someguy@example.com",
  "attendee_company": "Example Inc",
  "notes": "Discovery call to find synergies."
}
```

Modifications
Note: ElevenLabs doesn't handle webhook response headers or body, and only recognizes the response code. In other words, if the workflow responds with 400 Bad Request, that's the only info the voice agent gets back; it doesn't get any details, e.g. "User email still needed".

You can modify the structure of the expected webhook request body, and then you should reflect that structure change in all following nodes in the workflow. I.e., if you change attendee_name to attendeeFirstName and attendeeLastName, then you need to make this change in the following nodes that use these properties. You can also require, or make optional, other user data for the Cal.com event type, which would reduce or increase the data the voice agent must collect from the user. You can modify the authorization of this webhook to meet your security needs. ElevenLabs has some limitations you should be mindful of, but it also offers a secret feature which proves useful.

An improvement to this workflow could include a GET request to a CRM or other db to get info on the user interacting with the voice agent. This could reduce some of the data collection needed from the voice agent, e.g. if you already have the user's email address. I believe you can also get the user's phone number if the voice agent is set up on a dial-in interface, so the agent wouldn't need to ask for it. This all depends on your use case. A savvy step might be prompting the voice agent to get an email, and using the email in this workflow to pull enrichment data from Apollo.io or similar ;-)
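As referenced above, a minimal sketch of the validation step for the expected body, as it might appear in an n8n Code node; the required-field list mirrors the sample payload and should be adjusted to your event type:

```typescript
// Illustrative validation for the expected webhook body.
// Required fields mirror the sample payload; adjust to your Cal.com event type.
const required = ["attendee_name", "start", "attendee_timezone", "eventTypeId", "attendee_email"];

function validateBooking(body: Record<string, unknown>): { ok: boolean; missing: string[] } {
  const missing = required.filter((f) => body[f] === undefined || body[f] === "");
  // A non-empty "missing" list should map to a 400 response from the Respond to Webhook node
  return { ok: missing.length === 0, missing };
}
```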
by David w/ SimpleGrow
This n8n workflow tracks user engagement in a specific WhatsApp group by capturing incoming messages via a Whapi webhook. It first filters messages to ensure they come from the correct group, then identifies the message type—text, emoji reaction, voice, or image. The workflow searches for the user in an Airtable database using their WhatsApp ID and increments their message count by one. It updates the Airtable record with the new count and the date of the last interaction. This automated process helps measure user activity and supports engagement initiatives like weekly raffles or rewards. The system is flexible and can be expanded to include more message types or additional actions. Overall, it provides a seamless way to encourage and track user participation in your WhatsApp community.
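As a sketch of the find-and-increment step described above, assuming illustrative Airtable field names (`MessageCount`, `LastInteraction`) that may differ from the actual base:

```typescript
// Illustrative find-and-increment against Airtable; field names are assumptions.
// Mirrors the described logic: look up the user by WhatsApp ID, bump the count, stamp the date.
function incrementMessageCount(record: { id: string; fields: { MessageCount?: number } }) {
  return {
    id: record.id,
    fields: {
      MessageCount: (record.fields.MessageCount ?? 0) + 1,
      LastInteraction: new Date().toISOString().slice(0, 10), // date of last interaction
    },
  };
  // In the workflow this shape feeds an Airtable "Update record" node
}
```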
by Samir Saci
Tags: Supply Chain, Logistics, Control Tower

Context
Hey! I’m Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We design tools to help companies improve their logistics processes using data analytics, AI, and automation—to reduce costs and minimize environmental impact.

> Let’s use N8N to build smarter and more sustainable supply chains!

📬 For business inquiries, you can add me on LinkedIn

Who is this template for?
This workflow template is designed for logistics operations that need a monitoring solution for their distribution chains. Connected to your Transportation Management Systems, this AI agent can answer any question about the shipments handled by your distribution teams.

How does it work?
The workflow is connected to a Google BigQuery table that stores outbound order data (customer deliveries). Here’s what the AI agent does:
🤔 Receives a user question via chat.
🧠 Understands the request and generates the correct SQL query.
✅ Executes the SQL query using a BigQuery node.
💬 Responds to the user in plain English.

Thanks to the chat memory, users can ask follow-up questions to dive deeper into the data. An example of the kind of query the agent generates is sketched at the end of this description.

What do I need to get started?
This workflow requires no advanced programming skills. You’ll need:
- A Google BigQuery account with an SQL table storing transactional records.
- An OpenAI API key (GPT-4o) for the chat model.

Next Steps
Follow the sticky notes in the workflow to configure each node and start using AI to support your supply chain operations.

🎥 Watch My Tutorial

🚀 Curious how N8N can transform your logistics operations?

Notes
The chat trigger can easily be replaced with Teams, Telegram, or Slack for a better user experience. You can also connect this to a customer chat window using a webhook.

This workflow was built using N8N version 1.82.1
Submitted: March 24, 2025
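As referenced above, a hedged example of the kind of SQL the agent might generate; the dataset, table, and column names (`orders.outbound_deliveries`, `delivery_status`, `ship_date`) are illustrative assumptions about your BigQuery schema:

```typescript
// Illustrative agent output for: "How many late deliveries did we ship last week?"
// Dataset, table, and column names are assumptions about your TMS schema.
const generatedQuery = `
  SELECT COUNT(*) AS late_deliveries
  FROM \`my-project.orders.outbound_deliveries\`
  WHERE delivery_status = 'LATE'
    AND ship_date BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND CURRENT_DATE()
`;
// The BigQuery node runs this query, and the agent phrases the result in plain English.
```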
by Davide
This workflow is designed to intelligently route user queries to the most suitable large language model (LLM) based on the type of request received in a chat environment. It uses structured classification and model selection to optimize both performance and cost-efficiency in AI-driven conversations, dynamically routing requests to specialized AI models based on content type.

Benefits
- **Smart Model Routing**: Reduces costs by using lighter models for general tasks and reserving heavier models for complex needs.
- **Scalability**: Easily expandable by adding more request types or LLMs.
- **Maintainability**: Clear logic separation between classification, model routing, and execution.
- **Personalization**: Can be integrated with session IDs for per-user memory, enabling personalized conversations.
- **Speed Optimization**: Fast models like GPT-4.1 mini or Gemini Flash are chosen for tasks where speed is a priority.

How It Works
1. Input Handling: The workflow starts with the "When chat message received" node, which triggers the process when a chat message is received. The input includes the chat message (chatInput) and a session ID (sessionId).
2. Request Classification: The "Request Type" node uses an OpenAI model (gpt-4.1-mini) to classify the incoming request into one of four categories:
   - general: For general queries.
   - reasoning: For reasoning-based questions.
   - coding: For code-related requests.
   - google: For queries requiring Google tools.
   The classification is structured using the "Structured Output Parser" node, which enforces a consistent output format (a schema sketch is included at the end of this description).
3. Model Selection: The "Model Selector" node routes the request to one of four AI models based on the classification:
   - Opus 4 (Claude 4 Sonnet): Used for coding requests.
   - Gemini Thinking Pro: Used for reasoning requests.
   - GPT 4.1 mini: Used for general requests.
   - Perplexity: Used for search (Google-related) requests.
4. AI Processing: The selected model processes the request via the "AI Agent" node, which includes intermediate steps for complex tasks. The "Simple Memory" node retains session context using the provided sessionId, enabling multi-turn conversations.
5. Output: The final response is generated by the chosen model and returned to the user.

Set Up Steps
1. Configure Trigger: Ensure the "When chat message received" node is set up with the correct webhook ID to receive chat inputs.
2. Define Classification Logic: Adjust the prompt in the "Request Type" node to refine classification accuracy. Verify that the output schema in the "Structured Output Parser" node matches the expected categories (general, reasoning, coding, google).
3. Connect AI Models: Link each model node (Opus 4, Gemini Thinking Pro, GPT 4.1 mini, Perplexity) to the "Model Selector" node. Ensure credentials (API keys) for each model are correctly configured in their respective nodes.
4. Set Up Memory: Configure the "Simple Memory" node to use the sessionId from the input for context retention.
5. Test Workflow: Send test inputs to verify classification and model routing. Check intermediate outputs (e.g., request_type) to ensure correct model selection.
6. Activate Workflow: Toggle the workflow to "Active" in n8n after testing.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
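As referenced above, a hedged sketch of the structured output the parser could enforce; the template's actual JSON Schema may differ in naming:

```typescript
// Illustrative shape of the classifier's structured output.
// The template's actual schema may use different names.
type RequestType = "general" | "reasoning" | "coding" | "google";

interface ClassificationResult {
  request_type: RequestType; // consumed by the Model Selector node
}

// Example: { "request_type": "coding" } routes the chat to the coding model.
const example: ClassificationResult = { request_type: "coding" };
```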
by Baptiste Fort
What if your quote requests managed themselves?
Every quote request is a potential deal — but only if it's handled quickly, properly, and without things falling through the cracks. What if instead of copy-pasting emails and pinging teammates manually, your entire process just... ran itself?

This automation makes it happen: it captures form submissions, notifies your sales team on Slack, stores leads in Airtable, and sends an email confirmation to the client — all in one seamless n8n flow.

⚙️ Tools used
- **Tally** – to collect client quote requests
- **n8n** – to automate everything, no code needed
- **Airtable** – to store leads and track status
- **Slack** – to instantly notify your sales team
- **Gmail** – to confirm the request with the client

🧩 Flow structure overview
1. Trigger from a Tally form using a webhook
2. Extract and format the data
3. Create a new record in Airtable
4. Send a message to Slack
5. Wait 5 minutes
6. Send an email confirmation via Gmail

📥 Step 1 – Webhook (Tally)
This node listens for incoming quote requests from the Tally form.
- **HTTP Method:** POST
- **Path:** /Request a Quote
- **Authentication:** None
- **Respond:** Immediately

The data arrives as an array inside body.data.fields. Each field has a label and a value that we’ll need to map manually (a sample payload is sketched at the end of this description).

🧹 Step 2 – Edit Fields (Set)
This step extracts usable values from the raw form data. Example mapping:
- Name = {{ $json.body.data.fields[0].value }}
- Email Address = {{ $json.body.data.fields[1].value }}
- Type of Service Needed = {{ $json.body.data.fields[2].value }}
- Estimated Budget = {{ $json.body.data.fields[3].value }}
- Preferred Timeline = {{ $json.body.data.fields[4].value }}
- Additional Details or Questions = {{ $json.body.data.fields[5].value }}

📊 Step 3 – Create record in Airtable
We send the cleaned fields into a database (CRM) in Airtable.
- **Operation:** Create
- **Base & Table:** Request a Quote - Airtable Base
- **Mapping:** Manual field-to-column matching

Each quote submission becomes a new record with all project details.

📣 Step 4 – Send a message to Slack
This node notifies your sales team immediately in a Slack channel. Message format:

:new: New quote request received!
👤 Name: {{ $json.fields.Name }}
📧 Email: {{ $json.fields.Email }}
💼 Service: {{ $json.fields["Type of Service"] }}
💰 Budget: {{ $json.fields["Estimated Budget (€)"] }}
⏱️ Timeline: {{ $json.fields["Preferred Timeline"] }}
📝 Notes: {{ $json.fields["Additional Details"] }}

⏳ Step 5 – Wait 5 minutes
This node simply delays the email by 5 minutes. Why? To give a human salesperson time to reach out manually before the automated confirmation goes out. It adds a personal buffer.

📧 Step 6 – Send confirmation via Gmail
- **To:** {{ $('Edit Fields').item.json["Email Address"] }}
- **Subject:** Thanks for your quote request 🙌
- **Email Type:** HTML

Message body:
Hi {{ $('Edit Fields').item.json.Name }},
Thanks a lot for your quote request — we’ve received your information! Our team will get back to you within the next 24 hours to discuss your project.
Talk soon,
— The WebExperts Team

✅ Final result
With this automation in place:
- The client feels acknowledged and taken seriously
- Your team gets notified in real time
- You store everything in a clean, structured database

All this without writing a single line of backend code. It’s fast, scalable, and business-ready.
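For reference when mapping, a hedged sketch of the Tally webhook payload shape described in Step 1; the field labels and values are placeholders:

```typescript
// Illustrative Tally webhook payload (placeholder values), matching the
// body.data.fields structure that the Edit Fields node maps from.
const samplePayload = {
  body: {
    data: {
      fields: [
        { label: "Name", value: "Jane Doe" },
        { label: "Email Address", value: "jane@example.com" },
        { label: "Type of Service Needed", value: "Website redesign" },
        { label: "Estimated Budget", value: "5000" },
        { label: "Preferred Timeline", value: "Q3 2025" },
        { label: "Additional Details or Questions", value: "Need it mobile-first." },
      ],
    },
  },
};
// e.g. {{ $json.body.data.fields[1].value }} resolves to "jane@example.com"
```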
by Miquel Colomer
🎯 Precision Prospecting: Automate LinkedIn Lead Gen with n8n & Bright Data

📝 Overview
This workflow turns n8n into an AI-powered prospector, automatically searching Google for LinkedIn profiles, scraping profile data via Bright Data, and summarizing key details. Ideal for sales and recruitment teams seeking targeted lead lists without manual research.

🎥 Workflow in Action
Want to see this workflow in action? You have a chat window output below:

🔑 Key Features
- **AI Chat Trigger**: Start prospecting via conversational prompts.
- **Contextual Memory**: Retains the last 20 messages for coherent dialogue.
- **Automated Google Search**: Generates site-restricted queries and fetches the top result.
- **Bright Data Scraping**: Synchronously scrapes LinkedIn profile details by URL.
- **Intelligent Filtering**: Extracts only valid LinkedIn profile links.
- **Limit Control**: Returns a single, most relevant profile per request.
- **LLM Summary**: Uses GPT-4o-mini to interpret and present scraped data.

🚀 How It Works (Step-by-Step)
1. Prerequisites:
   - n8n ≥ v1.0 with community nodes: install n8n-nodes-brightdata (not a verified community node).
   - API credentials: OpenAI, Bright Data (web unlocker zone "web_unlocker1").
   - Webhook endpoint for the chat trigger.
2. Node Configuration:
   - When chat message received (chatTrigger): Fires on user prompt.
   - Simple Memory1 (memoryBufferWindow): Stores the last 20 chat messages.
   - AI Prospector Agent (agent): Orchestrates search logic.
   - Get 1 Google Result (brightData): Performs a Google search with site:linkedin.com/in.
   - Get Links from Body (html): Extracts all `<a>` hrefs from the search result page.
   - Extract Links (splitOut): Splits out individual link entries.
   - Filter only LinkedIn Profiles (filter): Ensures the URL contains "linkedin.com/" and starts with "https://".
   - Limit (limit): Restricts output to the first valid profile URL.
   - Search LinkedIn URI (toolWorkflow): Passes the URL to a secondary workflow to fetch the first link.
   - Get LinkedIn Profile Data (brightDataTool): Scrapes the profile JSON.
   - OpenAI Chat Model (lmChatOpenAi): Summarizes and formats the scraped data.
3. Workflow Logic (a sketch of the query building and filtering appears after the sections below):
   - The user asks for a person by company & name, company & position, or LinkedIn URL.
   - The agent builds a Google query (e.g., site:linkedin.com/in bright data cmo) and calls "Get 1 Google Result."
   - Extracted links are filtered and limited to the top valid profile.
   - If the user provided a direct LinkedIn URL, the agent skips the search and scrapes immediately.
   - The scraped profile JSON is passed to GPT-4o-mini to generate a concise summary.
4. Testing & Optimization:
   - Trigger via Execute Workflow for dry runs.
   - Inspect intermediate node outputs in n8n's Execution panel.
   - Adjust maxIterations or the memory window length for performance.
   - Tune the Bright Data zone or country settings to optimize scraping speed.
5. Deployment & Monitoring:
   - Activate the workflow and expose its webhook URL.
   - Use n8n's built-in alerts or external monitoring (e.g., Slack notifications) on failures.
   - Rotate credentials via n8n's Credential Vault when needed.
   - Version-control the workflow via duplicates or Git-backed n8n instances.

✅ Pre-requisites
- **OpenAI Account**: API key for GPT-4o-mini.
- **Bright Data Account**: Zone "web_unlocker1" and dataset gd_l1viktl72bvl7bjuj0.
- **n8n Version**: v1.0+ with community nodes installed.
- **Permissions**: Webhook access, Credential Vault read/write.

👤 Who Is This For?
- Sales teams automating outbound LinkedIn prospecting.
- Recruiters sourcing candidates without manual scraping.
- Marketing ops looking to enrich CRM with accurate profile data.
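As referenced in the workflow logic, a hedged sketch of the query building and link filtering; the function names are illustrative, not the actual node code:

```typescript
// Illustrative query building and link filtering (names are not the actual node code).
function buildQuery(company: string, nameOrRole: string): string {
  // Site-restricted search keeps results to LinkedIn profile pages
  return `site:linkedin.com/in ${company} ${nameOrRole}`;
}

function filterLinkedInProfiles(links: string[]): string | undefined {
  return links
    .filter((url) => url.startsWith("https://") && url.includes("linkedin.com/"))
    .at(0); // Limit node equivalent: keep only the top valid profile
}

// Example: buildQuery("bright data", "cmo") -> "site:linkedin.com/in bright data cmo"
```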
📈 Benefits & Use Cases
- **Efficiency**: Reduces hours of manual search and data entry to seconds.
- **Accuracy**: Filters out non-LinkedIn links and ensures high-quality results.
- **Scalability**: Handles multiple prospect requests concurrently via chat or API.
- **Integration**: Easily hooks into CRMs or email sequencers downstream.

Workflow created and verified by Miquel Colomer https://www.linkedin.com/in/miquelcolomersalas/ and N8nHackers https://n8nhackers.com