by explorium
# Automatically enrich prospect data from HubSpot using Explorium and create leads in Salesforce

This n8n workflow streamlines the process of enriching prospect information by automatically pulling data from HubSpot, processing it through Explorium's AI-powered tools, and creating new leads in Salesforce with enhanced prospect details.

## Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

**HubSpot**
- **Type**: App Token (or OAuth2 for broader compatibility)
- **Used for**: triggering on new contacts, fetching contact data

**Explorium API**
- **Type**: Generic Header Auth
- **Header**: Authorization
- **Value**: Bearer YOUR_API_KEY
- Get an Explorium API key

**Salesforce**
- **Type**: OAuth2 or Username/Password
- **Used for**: creating new lead records

Go to Settings → Credentials, create these three credentials, and assign them in the respective nodes before running the workflow.

## Workflow Overview

### Node 1: HubSpot Trigger

This node listens for real-time events from the connected HubSpot account. Once triggered, it passes metadata about the event to the next step in the flow.

### Node 2: HubSpot

This node fetches contact details from HubSpot after the trigger event.

- **Credential**: Connected using a HubSpot App Token
- **Resource**: Contact
- **Operation**: Get Contact
- **Return All**: Disabled

It retrieves the full contact details needed for further processing and enrichment.

### Node 3: Match prospect

This node sends each contact's data to Explorium's AI-powered prospect matching API in real time.

- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/match
- **Authentication**: Generic Header Auth (using a configured credential)
- **Headers**: Content-Type: application/json

The request body is dynamically built from contact data, typically including full_name, company_name, email, phone_number, and linkedin (see the first sketch after Node 7 below). These fields are matched against Explorium's intelligence graph to return enriched or validated profiles. The response contains total_matches, matched_prospects, and a prospect_id; each response is used downstream to enrich, validate, or create lead information.

### Node 4: Filter

This node filters the output of the Match prospect step so that only valid, matched results continue in the flow. Only records that contain at least one matched prospect with a non-null prospect_id are passed forward.

- **Status**: Currently deactivated (as shown by the "Deactivate" label)

### Node 5: Extract Prospect IDs from Matched Results

This node extracts all valid prospect_id values from previously matched prospects and compiles them into a flat array. It loops over all matched items, extracts each prospect_id from the matched_prospects array, and returns a single object with an array of all prospect IDs (see the second sketch after Node 7 below).

### Node 6: Explorium Enrich Contacts Information

This node performs bulk enrichment of contacts by querying Explorium with the list of matched prospect_ids.

- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/contacts_information/bulk_enrich
- **Authentication**: Header Auth (using saved credentials)
- **Headers**: "Content-Type": "application/json", "Accept": "application/json"

It returns enriched contact information, such as:

- **emails**: professional/personal email addresses
- **phone_numbers**: mobile and work numbers
- **professions_email**, **professional_email_status**, **mobile_phone**

### Node 7: Explorium Enrich Profiles

This additional enrichment node provides supplementary contact data enhancement, running in parallel with the primary enrichment process.
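For illustration, here is what the Match prospect request body (Node 3) might look like if it were assembled in a Code node. The wrapper key `prospects_to_match` and the HubSpot property names are assumptions, not Explorium's documented schema; verify both against the Explorium API docs and your own HubSpot properties:

```javascript
// Hypothetical n8n Code node that builds the Match prospect request body.
// The Explorium field names come from the list above; the HubSpot property
// names (firstname, lastname, company, ...) are assumptions.
const c = $json.properties ?? {};
return [{
  json: {
    prospects_to_match: [ // assumed wrapper key (verify against Explorium docs)
      {
        full_name: `${c.firstname ?? ''} ${c.lastname ?? ''}`.trim(),
        company_name: c.company,
        email: c.email,
        phone_number: c.phone,
        linkedin: c.linkedin_url, // hypothetical property name
      },
    ],
  },
}];
```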
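And a minimal sketch of the Extract Prospect IDs Code node (Node 5), using only field names that appear in the match response described above:

```javascript
// n8n Code node: collect every prospect_id from the matched results
// into one flat array, as Node 5 describes.
const prospectIds = [];
for (const item of $input.all()) {
  for (const match of item.json.matched_prospects ?? []) {
    if (match.prospect_id != null) prospectIds.push(match.prospect_id);
  }
}
return [{ json: { prospect_ids: prospectIds } }];
```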
### Node 8: Merge

This node combines the multiple data streams from the parallel enrichment processes into a single output, letting you consolidate data from the different Explorium enrichment endpoints. The "combine" setting merges the incoming data streams rather than overwriting them.

### Node 9: Code - flatten

This custom Code node processes and transforms the merged enrichment data before the Salesforce lead is created (a sketch appears at the end of this section). It can be used to:

- Flatten nested data structures
- Format data according to Salesforce field requirements
- Apply business logic or data validation
- Map Explorium fields to Salesforce lead properties
- Handle data type conversions

### Node 10: Salesforce

This final node creates new leads in Salesforce using the enriched data returned by Explorium.

- **Credential**: Salesforce OAuth2 or Username/Password
- **Resource**: Lead
- **Operation**: Create Lead

The node creates new lead records with enriched information, including contact details, company information, and professional data obtained through the Explorium enrichment process.

## Workflow Flow Summary

1. **Trigger**: HubSpot webhook fires on new/updated contacts
2. **Fetch**: Retrieve contact details from HubSpot
3. **Match**: Find prospect matches using Explorium
4. **Filter**: Keep only successfully matched prospects (currently deactivated)
5. **Extract**: Compile prospect IDs for bulk enrichment
6. **Enrich**: Enrich contact information in parallel through multiple Explorium endpoints
7. **Merge**: Combine enrichment results
8. **Transform**: Flatten and prepare data for Salesforce (Code node)
9. **Create**: Create new lead records in Salesforce

This workflow ensures comprehensive data enrichment while maintaining data quality, providing a seamless integration between HubSpot prospect data and Salesforce lead creation. The parallel enrichment structure maximizes data collection efficiency before creating high-quality leads in your CRM.
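To make Node 9 concrete, here is a minimal sketch of what the flatten Code node could do. The nested shapes (emails[0].address, phone_numbers[0].number) are assumptions about Explorium's enrichment payload, not its documented schema:

```javascript
// Hypothetical flatten step: lift nested enrichment fields to the top level
// so they can be mapped onto Salesforce lead properties.
return $input.all().map((item) => {
  const d = item.json;
  return {
    json: {
      email: d.emails?.[0]?.address ?? d.professions_email,   // assumed shape
      phone: d.phone_numbers?.[0]?.number ?? d.mobile_phone,  // assumed shape
      emailStatus: d.professional_email_status,
    },
  };
});
```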
by Miquel Colomer
# 🎯 Precision Prospecting: Automate LinkedIn Lead Gen with n8n & Bright Data

## 📝 Overview

This workflow turns n8n into an AI-powered prospector: it automatically searches Google for LinkedIn profiles, scrapes profile data via Bright Data, and summarizes the key details. It is ideal for sales and recruitment teams that need targeted lead lists without manual research.

## 🎥 Workflow in Action

Want to see this workflow in action? A chat window output is shown below.

## 🔑 Key Features

- **AI Chat Trigger**: Start prospecting via conversational prompts.
- **Contextual Memory**: Retains the last 20 messages for coherent dialogue.
- **Automated Google Search**: Generates site-restricted queries and fetches the top result.
- **Bright Data Scraping**: Synchronously scrapes LinkedIn profile details by URL.
- **Intelligent Filtering**: Extracts only valid LinkedIn profile links.
- **Limit Control**: Returns a single, most relevant profile per request.
- **LLM Summary**: Uses GPT-4o-mini to interpret and present scraped data.

## 🚀 How It Works (Step-by-Step)

**Prerequisites:**
- n8n ≥ v1.0 with community nodes: install n8n-nodes-brightdata (an unverified community node).
- API credentials: OpenAI, Bright Data (web unlocker zone "web_unlocker1").
- Webhook endpoint for the chat trigger.

**Node Configuration:**
- **When chat message received** (chatTrigger): Fires on a user prompt.
- **Simple Memory1** (memoryBufferWindow): Stores the last 20 chat messages.
- **AI Prospector Agent** (agent): Orchestrates the search logic.
- **Get 1 Google Result** (brightData): Performs a Google search with site:linkedin.com/in.
- **Get Links from Body** (html): Extracts all `<a>` hrefs from the search result page.
- **Extract Links** (splitOut): Splits out individual link entries.
- **Filter only LinkedIn Profiles** (filter): Ensures the URL contains "linkedin.com/" and starts with "https://" (see the sketch at the end of this section).
- **Limit** (limit): Restricts output to the first valid profile URL.
- **Search LinkedIn URI** (toolWorkflow): Passes the URL to a secondary workflow to fetch the first link.
- **Get LinkedIn Profile Data** (brightDataTool): Scrapes the profile JSON.
- **OpenAI Chat Model** (lmChatOpenAi): Summarizes and formats the scraped data.

**Workflow Logic:**
- The user asks for a person by company & name, company & position, or LinkedIn URL.
- The Agent builds a Google query (e.g., site:linkedin.com/in bright data cmo) and calls "Get 1 Google Result."
- Extracted links are filtered and limited to the top valid profile.
- If the user provided a direct LinkedIn URL, the Agent skips the search and scrapes immediately.
- The scraped profile JSON is passed to GPT-4o-mini to generate a concise summary.

**Testing & Optimization:**
- Trigger via Execute Workflow for dry runs.
- Inspect intermediate node outputs in n8n's Execution panel.
- Adjust maxIterations or the memory window length for performance.
- Tune the Bright Data zone or country settings to optimize scraping speed.

**Deployment & Monitoring:**
- Activate the workflow and expose its webhook URL.
- Use n8n's built-in Alerts or external monitoring (e.g., Slack notifications) on failures.
- Rotate credentials via n8n's Credential Vault when needed.
- Version-control the workflow via duplicates or Git-backed n8n instances.

## ✅ Pre-requisites

- **OpenAI Account**: API key for GPT-4o-mini.
- **Bright Data Account**: Zone "web_unlocker1" and dataset gd_l1viktl72bvl7bjuj0.
- **n8n Version**: v1.0+ with community nodes installed.
- **Permissions**: Webhook access, Credential Vault read/write.

## 👤 Who Is This For?

- Sales teams automating outbound LinkedIn prospecting.
- Recruiters sourcing candidates without manual scraping.
- Marketing ops looking to enrich CRM records with accurate profile data.
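For reference, the conditions applied by the "Filter only LinkedIn Profiles" node boil down to the following check, shown here as plain JavaScript:

```javascript
// The filter keeps only secure LinkedIn URLs, matching the node's two conditions.
const links = [
  "https://www.linkedin.com/in/someone",
  "http://example.com/not-a-profile",
];
const profiles = links.filter(
  (url) => url.startsWith("https://") && url.includes("linkedin.com/")
);
// profiles => ["https://www.linkedin.com/in/someone"]
```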
## 📈 Benefits & Use Cases

- **Efficiency**: Reduces hours of manual search and data entry to seconds.
- **Accuracy**: Filters out non-LinkedIn links and ensures high-quality results.
- **Scalability**: Handles multiple prospect requests concurrently via chat or API.
- **Integration**: Easily hooks into CRMs or email sequencers downstream.

Workflow created and verified by Miquel Colomer (https://www.linkedin.com/in/miquelcolomersalas/) and N8nHackers (https://n8nhackers.com).
by Miquel Colomer
## 📝 Overview

This workflow transforms n8n into a smart real-estate concierge by combining an AI chat interface with Bright Data's marketplace datasets. Users interact via chat to specify city, price, bedrooms, and bathrooms, and receive a curated list of three homes for sale, complete with images and briefings.

## 🎥 Workflow in Action

Want to see this workflow in action? Play the video.

## 🔑 Key Features

- **AI-Powered Chat Trigger**: Instantly start conversations using LangChain's Chat Trigger node.
- **Contextual Memory**: Retains up to 30 recent messages for coherent back-and-forth.
- **Bright Data Integration**: Dynamically filters "FOR_SALE" properties by city, price, bedrooms, and bathrooms (limit = 3).
- **Automated Snapshot Retrieval**: Polls for dataset readiness and fetches the full snapshot content.
- **HTML-Formatted Output**: Presents results as a `<ul>` of `<li>` items, embedding property images.

## 🚀 How It Works (Step-by-Step)

**Prerequisites:**
- n8n ≥ v1.0
- Community nodes: install n8n-nodes-brightdata (an unverified community node)
- API credentials: OpenAI, Bright Data
- Webhook endpoint to receive chat messages

**Node Configuration:**
- **Chat Trigger**: Listens for incoming chat messages; shows a welcome screen.
- **Memory Buffer**: Stores the last 30 messages for context.
- **OpenAI Chat Model**: Uses GPT-4o-mini to interpret user intent.
- **Real Estate AI Agent**: Orchestrates the filtering logic, calls tools, and formats responses.
- **Bright Data "Filter Dataset" Tool**: Applies the user-defined filters plus homeStatus = FOR_SALE (see the sketch at the end of this section).
- **Wait & Recover Snapshot**: Polls until the snapshot is ready, then fetches its content.
- **Get Snapshot Content**: Converts the raw JSON into a structured list.

**Workflow Logic:**
- The user sends search criteria → the Agent validates the inputs.
- The Agent invokes "Filter Dataset" once all filters are present.
- Upon dataset readiness, the snapshot is retrieved and parsed.
- The final output is rendered as a bullet list with property images.

**Testing & Optimization:**
- Use the built-in Execute Workflow trigger for rapid dry runs.
- Inspect node outputs in n8n's UI; adjust filter defaults or snapshot limits.
- Tune OpenAI model parameters (e.g., maxIterations) for faster responses.

**Deployment & Monitoring:**
- Activate the main workflow and expose its webhook URL.
- Monitor executions in the "Executions" panel; set up alerts for errors.
- Archive or duplicate workflows as needed; update credentials via the credential manager.

## ✅ Pre-requisites

- **Bright Data Account**: API key for marketplaceDataset.
- **OpenAI Account**: Access to the GPT-4o-mini model.
- **n8n Version**: v1.0 or later with community node support.
- **Permissions**: Webhook access, credential vault read/write.

## 👤 Who Is This For?

- Real-estate agencies and brokers seeking to automate client queries.
- PropTech startups building conversational search tools.
- Data analysts who want on-demand property snapshots without manual scraping.

## 📈 Benefits & Use Cases

- **Time Savings**: Replaces manual MLS searches with an AI-driven chat.
- **Scalability**: Serves multiple clients simultaneously via webchat or an embedded widget.
- **Consistency**: Always reports exactly three properties, ensuring concise results.
- **Engagement**: Visual listings with images boost user satisfaction and conversion.

Workflow created and verified by Miquel Colomer (https://www.linkedin.com/in/miquelcolomersalas/) and N8nHackers (https://n8nhackers.com).
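As a rough illustration, the "Filter Dataset" tool ends up sending Bright Data a filter object along these lines. Only homeStatus = FOR_SALE and the limit of three results come from the workflow itself; every other key name below is a guess at the dataset's schema:

```javascript
// Hypothetical filter payload for the Bright Data marketplace dataset.
const filter = {
  homeStatus: "FOR_SALE", // fixed by the workflow
  city: "Austin",         // taken from the user's chat input (example value)
  price: { max: 500000 }, // assumed key structure
  bedrooms: 3,
  bathrooms: 2,
};
const limit = 3;          // the agent always reports exactly three homes
```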
by Davide
## Workflow Overview

This workflow automates the creation and management of a custom OpenAI Assistant for a travel agency ("Travel with us"), leveraging Google Drive for document storage.

## How It Works

### 1. Create the OpenAI Assistant

- **Node**: OpenAI
- **Action**: Creates a custom assistant named "Travel with us" Assistant using the gpt-4o-mini model.
- **Instructions**: Respond only using the provided document (e.g., agency-specific info); stay friendly, brief, and focused on travel-related queries; ignore irrelevant questions politely.
- **Credentials**: Requires an OpenAI API key.

### 2. Upload the Agency Document

- **Google Drive Node**: Downloads a Google Doc as a PDF.
- **OpenAI2 Node**: Uploads the PDF to OpenAI with purpose: "assistants" and outputs a file_id.

### 3. Update the Assistant with the Document

- **OpenAI Node**: Updates the assistant to include the uploaded file.

### 4. Chat Interaction

- **Chat Trigger**: Activates when a message is received ("When chat message received").
- **OpenAI Assistant Node**: Uses the updated assistant to respond to user queries.
- **Memory**: Window Buffer Memory retains chat context for coherent conversations.

## Set Up Steps

1. **Prepare the document**: Store your travel agency guide in Google Drive (e.g., as a Google Doc) and update the Google Drive node with your document's ID.
2. **Configure credentials**: Connect Google Drive via OAuth2 (googleDriveOAuth2Api) and add your OpenAI API key to all OpenAI nodes.
3. **Customize the assistant**: Modify the instructions in the OpenAI node to reflect your agency's needs, and ensure the document includes FAQs, policies, and travel info.
4. **Test the workflow**: Trigger it manually ("Test workflow") to create the assistant and upload the file, then send a chat message (e.g., "What are your travel packages?") to test responses.

## Dependencies

- **Google Drive account**: To store and retrieve the agency document.
- **OpenAI API access**: For assistant creation and file uploads.
by Giovanni Ruggieri
# Send Webflow form data to Google Sheets

## 🚀 How It Works

This workflow connects your Webflow form submissions to Google Sheets with style and efficiency. Here's the magic in action:

1. **Form Submitted** 🎉 – A Webflow form submission triggers the flow.
2. **Fields Prepared** 🛠️ – Data gets organized, and a submission date is added.
3. **Data Logged** 📋 – Your data finds a cozy new home in Google Sheets.

## Set-Up Steps

⏱️ Estimated time: 5–10 minutes

1. **Connect Webflow** 🌐 – Hook up your Webflow account and disable legacy APIs (instructions inside the workflow).
2. **Set Up Google Sheets** 🧾 – Choose your spreadsheet and authenticate.
3. **You're Ready!** 💫 – Let the workflow take it from here.

## Why You'll Love It ❤️

- **No More Manual Copy-Paste**: Save time and keep data clean.
- **Smart Field Mapping**: Automatically creates columns and fits your data.
- **Helpful Notes Along the Way**: Friendly tips guide you through setup like a pro.

Enjoy the smooth ride from Webflow to Google Sheets! 🚀
by Jaruphat J.
## Who's it for

This template is perfect for content creators, AI enthusiasts, marketers, and developers who want to automate the generation of cinematic videos using Google Vertex AI's Veo 3 model. It's also ideal for anyone experimenting with generative AI for video in n8n.

## What it does

This workflow:

1. Accepts a text prompt and a GCP access token via a form.
2. Sends the prompt to the Veo 3 preview model using Vertex AI's predictLongRunning endpoint.
3. Waits for the video rendering to complete.
4. Fetches the final result and converts the base64-encoded video to a file.
5. Uploads the resulting .mp4 to your Google Drive.

## How to set up

1. Enable the Vertex AI API in your GCP project: https://console.cloud.google.com/marketplace/product/google/aiplatform.googleapis.com
2. Authenticate with GCP using Cloud Shell or a local terminal:

```
gcloud auth login
gcloud config set project [YOUR_PROJECT_ID]
gcloud auth application-default set-quota-project [YOUR_PROJECT_ID]
gcloud auth print-access-token
```

3. Copy the token and use it in the form when running the workflow. ⚠️ This token lasts ~1 hour; regenerate it as needed.
4. Connect your Google Drive OAuth2 credentials to allow the file upload.
5. Import this workflow into n8n and execute it via the form trigger.

## Requirements

- **n8n (v1.94.1+)**
- A Google Cloud project with:
  - Vertex AI API enabled
  - Billing enabled
  - A way to get an access token
- A Google Drive OAuth2 credential connected to n8n

## How to customize the workflow

- You can modify the prompt in the HTTP node to match your use case.
- Replace the Google Drive upload node with alternatives like Dropbox, S3, or YouTube upload.
- Extend the workflow to add subtitles, audio dubbing, or LINE/Slack alerts.

Step-by-step for each major node: Prompt Input → Vertex Predict → Wait → Fetch Result → Convert to File → Upload (a sketch of the Vertex call follows at the end of this section).

## Best Practices Followed

- No hardcoded API tokens
- Secure: the GCP token is input via the form, not stored in the workflow
- All nodes are renamed with a clear purpose
- All editable config is grouped in a Set node

## External References

- GCP Veo API docs: https://cloud.google.com/vertex-ai/docs/generative-ai/video/overview

## Disclaimer

This workflow uses official Google Cloud APIs and requires a valid GCP project. The access token should be generated securely using the gcloud CLI. Do not embed tokens in the workflow itself.

## Notes on the GCP Access Token

To use the Vertex AI API in n8n securely, run the following on your local machine or in GCP Cloud Shell:

```
gcloud auth login
gcloud config set project your-project-id
gcloud auth print-access-token
```

Paste the token into the workflow form field when submitting. Do not hardcode the token into HTTP nodes or Set nodes; input it each time or use a secure credential vault.
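For orientation, a predictLongRunning call like the one the HTTP node makes could be sketched in plain JavaScript as below. The region and especially the model ID are assumptions; check the Veo documentation for the current preview model name:

```javascript
// Hypothetical Vertex AI predictLongRunning request for Veo 3.
const accessToken = "PASTE_OUTPUT_OF_gcloud_auth_print-access-token";
const project = "your-project-id";
const region = "us-central1";             // assumed region
const model = "veo-3.0-generate-preview"; // assumed model ID

const url =
  `https://${region}-aiplatform.googleapis.com/v1/projects/${project}` +
  `/locations/${region}/publishers/google/models/${model}:predictLongRunning`;

const res = await fetch(url, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    instances: [{ prompt: "A cinematic drone shot over a coastal city at dusk" }],
  }),
});
const operation = await res.json(); // long-running op: poll it, then decode the base64 video
```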
by Angel Menendez
A temporary solution that uses the undocumented REST API to back up workflows to Google Drive. Please note that this workflow has issues: it does not support versioning, so it creates multiple copies of the workflows; if you run it daily, the folder will grow quickly. Once I figure out how to version files in Google Drive, I'll update it here.
by Harshil Agrawal
This workflow demonstrates how you can use Redis to implement rate limiting for your API. The workflow uses the incoming API key to uniquely identify the user and uses it as a key in Redis. Every time a request is made, the value is incremented by one, and the threshold is checked using the IF node (see the sketch below). Duplicate the following Airtable base to try out the workflow: https://airtable.com/shraudfG9XAvqkBpF
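Outside n8n, the same increment-and-check logic can be sketched with the node-redis client. The workflow as described only increments and compares; the expiry below is an added assumption that turns the counter into a fixed time window:

```javascript
// Rate-limit sketch mirroring the Redis INCR + IF-node threshold check.
import { createClient } from "redis";

const client = createClient();
await client.connect();

async function isAllowed(apiKey, limit = 100, windowSeconds = 60) {
  const count = await client.incr(apiKey);                     // one counter per API key
  if (count === 1) await client.expire(apiKey, windowSeconds); // start the window (assumption)
  return count <= limit;                                       // the IF node's comparison
}
```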
by rpshu
-- Disclaimer: This template is mainly made for self-hosted users who can reach CSV files in their file system. For Cloud users, just replace the first few nodes with your file system of choice, like Google Drive or Dropbox --

# How to automatically import CSV files into Postgres

## 1. Project description

This workflow demonstrates how a CSV file can be automatically imported into an existing PostgreSQL database. Before running the workflow, please make sure the file exists on the server: /tmp/t1.csv

The name of the test database is db01; you can replace it. Then create table t1:

```sql
create table t1(id int, name varchar(10));
```

The content of the file is the following:

| id | name |
|----|------|
| 1  | a    |
| 2  | b    |
| 3  | c    |

## 2. Importing a custom CSV file

If you want to import a custom CSV file, use the following steps.

### 2.1. Create a table in the database

SQL commands reference: https://www.postgresql.org/docs/current/sql-createtable.html

### 2.2. Upload the CSV file

Upload the CSV file to the n8n server and make sure it can be read.
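For self-hosted users who prefer a script, here is a minimal sketch of the same import in plain JavaScript with the `pg` client, assuming /tmp/t1.csv has an id,name header row followed by the rows shown above:

```javascript
// Minimal CSV-to-Postgres import sketch for the t1 table above.
import { readFileSync } from "node:fs";
import pg from "pg";

const client = new pg.Client({ database: "db01" }); // test database from above
await client.connect();

const lines = readFileSync("/tmp/t1.csv", "utf8").trim().split("\n");
for (const line of lines.slice(1)) {                // skip the id,name header
  const [id, name] = line.split(",");
  await client.query("INSERT INTO t1 (id, name) VALUES ($1, $2)", [id, name]);
}
await client.end();
```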
by Lucas Walter
# AI News Scraping System

This n8n workflow automates pulling breaking AI-related headlines from curated RSS feeds, scraping their full content, and saving readable Markdown versions directly to Google Drive.

Use cases include:

- Creating a personal newsletter curation system
- Automating blog post research workflows
- Archiving news content for later summarization or AI use

## How it Works

1. **Scheduled Triggers**: The workflow runs every 3–4 hours using multiple Schedule Trigger nodes. Each trigger targets a different news source (e.g., Google News, OpenAI Blog, Hugging Face).
2. **Fetch and Parse Feeds**: RSS feeds are fetched via the HTTP Request node. Items from each feed are split into individual entries using the Split Out node.
3. **Scrape Article Content**: Each article URL is sent to the Firecrawl API with a prompt to extract only the main content in Markdown. The scraping skips navigation, headers, footers, and ads (see the sketch at the end of this section).
4. **Convert and Save**: The extracted Markdown is converted into a .md file using the Convert to File node, which is then uploaded to a Google Drive folder.

## Good to Know

- This workflow uses the Firecrawl API for web scraping. Be sure to configure a Generic HTTP Header credential with your API key.
- Output files are saved in Markdown format.
- You can add more Schedule Trigger + HTTP Request pairs to extend this workflow to additional feeds.

## Requirements

- Firecrawl API account for scraping
- Google Drive account (OAuth2 credentials must be configured in n8n)
- n8n instance (self-hosted or cloud)

## Customization Ideas

- Replace or extend the RSS feeds with sources relevant to your niche
- Load scraped news stories into a prompt to create new content like TikToks and Reels
- Add a summarization step using an LLM like GPT or Claude
- Send the Markdown files to Notion, Slack, or a blog CMS

## Example Feeds

| Feed Name | URL |
|-----------|-----|
| Google News (AI) | https://rss.app/feeds/v1.1/AkOariu1C7YyUUMv.json |
| OpenAI Blog | https://rss.app/feeds/v1.1/xNVg2hbY14Z7Gpva.json |
| Hugging Face | https://rss.app/feeds/v1.1/sgHcE2ehHQMTWhrL.json |
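A scrape request to Firecrawl for one article might look roughly like this; the endpoint and body fields follow Firecrawl's v1 scrape API as commonly documented, so verify them against the current docs:

```javascript
// Hypothetical Firecrawl scrape call for a single article URL.
const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://example.com/some-ai-article", // placeholder article URL
    formats: ["markdown"],                      // ask for Markdown output
    onlyMainContent: true,                      // skip nav, headers, footers, ads
  }),
});
const { data } = await res.json();
// data.markdown is what the Convert to File node turns into a .md file
```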
by Manu
For every release on GitHub, this workflow creates an issue on GitLab.

1. Copy the workflow to your n8n.
2. Fill in the missing fields (credentials & repo names).

The workflow is based on a Cron node so that it can track GitHub repos you're not a member of (where you won't be able to create a webhook). If you do own the repo, you could replace the Cron & GitHub nodes with a GitHub Trigger.
by Gavin
This workflow makes an HTTPS request to ConnectWise Manage through its REST API. It pulls all tickets in the "New" status (or whichever status you like) and notifies your dispatch team/personnel via Microsoft Teams whenever a new ticket comes in (see the sketch below).

Video explanation: https://youtu.be/yaSVCybSWbM
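For context, the ticket query behind this workflow could be sketched as follows. The conditions syntax is ConnectWise Manage's; the host, keys, and clientId below are placeholders:

```javascript
// Hypothetical ConnectWise Manage request for tickets in the "New" status.
const conditions = encodeURIComponent('status/name="New"');
const res = await fetch(
  `https://your-cw-host/v4_6_release/apis/3.0/service/tickets?conditions=${conditions}`,
  {
    headers: {
      Authorization:
        "Basic " + Buffer.from("companyId+publicKey:privateKey").toString("base64"),
      clientId: "YOUR_CLIENT_ID", // ConnectWise requires a clientId header
      Accept: "application/json",
    },
  }
);
const newTickets = await res.json(); // hand these to the Teams notification step
```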