by Jonathan
This workflow is part of an MSP collection, which is publicly available on GitHub. It archives or unarchives a Clockify project depending on the corresponding Syncro ticket status. Note that Syncro should be set up with a webhook via 'Notification Set for Ticket - Status was changed'. The workflow doesn't handle merging of tickets, as Syncro doesn't support a 'Notification Set' for merged tickets, so change a ticket to 'Resolved' before merging it.

Prerequisites

- A Clockify account and credentials

Nodes

- Webhook node triggers the workflow.
- IF node filters out projects that don't have the status 'Resolved'.
- Clockify nodes get all projects that do (or don't) have the status 'Resolved', based on the IF route.
- HTTP Request nodes unarchive unresolved projects and archive resolved projects, respectively.
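For reference, the call the HTTP Request nodes make can be reproduced with Clockify's REST API. A minimal sketch, assuming a standard Clockify API key; the workspace and project IDs are placeholders:

```javascript
// Minimal sketch: archive (or unarchive) a Clockify project via the REST API.
// Assumes a Clockify API key; workspace and project IDs are placeholders.
const workspaceId = "YOUR_WORKSPACE_ID";
const projectId = "YOUR_PROJECT_ID";

async function setProjectArchived(archived) {
  const res = await fetch(
    `https://api.clockify.me/api/v1/workspaces/${workspaceId}/projects/${projectId}`,
    {
      method: "PUT",
      headers: {
        "X-Api-Key": "YOUR_CLOCKIFY_API_KEY",
        "Content-Type": "application/json",
      },
      // archived: true archives the project, false unarchives it
      body: JSON.stringify({ archived }),
    }
  );
  return res.json();
}
```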
by Ghaith Alsirawan
🧠 This workflow is designed for one purpose only: to bulk-upload structured JSON articles from an FTP server into a Qdrant vector database for use in LLM-powered semantic search, RAG systems, or AI assistants. The JSON files are pre-cleaned and contain metadata and rich text chunks, ready for vectorization.

This workflow handles:

- Downloading from FTP
- Parsing & splitting
- Embedding with OpenAI embeddings
- Storing in Qdrant for future querying

JSON structure format for blog articles:

```json
{
  "id": "article_001",
  "title": "reseguider",
  "language": "sv",
  "tags": ["london", "resa", "info"],
  "source": "alltomlondon.se",
  "url": "https://...",
  "embedded_at": "2025-04-08T15:27:00Z",
  "chunks": [
    {
      "chunk_id": "article_001_01",
      "section_title": "Introduktion",
      "text": "Välkommen till London..."
    },
    ...
  ]
}
```

🧰 Benefits

✅ Automated Vector Loading: Handles FTP → JSON → Qdrant in a hands-free pipeline.
✅ Clean Embedding Input: Supports pre-validated chunks with metadata: titles, tags, language, and article ID.
✅ AI-Ready Format: Perfect for Retrieval-Augmented Generation (RAG), semantic search, or assistant memory.
✅ Flexible Architecture: Modular and swappable: FTP can be replaced with GDrive/Notion/S3, and embeddings can switch to local models like Ollama.
✅ Community Friendly: This template helps others adopt best practices for vector DB feeding and LLM integration.
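A Code node that splits each article into per-chunk items for embedding might look like this — a minimal sketch; the field names follow the JSON structure above, while the output metadata keys are an assumption:

```javascript
// Minimal sketch of an n8n Code node that turns each parsed article
// into one item per chunk, ready for the embedding + Qdrant insert steps.
// Input field names follow the JSON structure above; metadata keys are an assumption.
const out = [];
for (const item of $input.all()) {
  const article = item.json;
  for (const chunk of article.chunks) {
    out.push({
      json: {
        text: `${chunk.section_title}\n${chunk.text}`, // text to embed
        metadata: {
          article_id: article.id,
          chunk_id: chunk.chunk_id,
          title: article.title,
          language: article.language,
          tags: article.tags,
          source: article.source,
          url: article.url,
        },
      },
    });
  }
}
return out;
```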
by Tom
This workflow shows a no-code approach to creating Salesforce accounts and contacts based on data coming from Excel 365 (the online version of Microsoft Excel). For a version working with regular Excel files, check out this workflow instead.

To run the workflow:

1. Make sure you have both Excel 365 and Salesforce authenticated with n8n.
2. Have a Microsoft Excel workbook with contacts and their account names ready.
3. Select the workbook and sheet in the Microsoft Excel node of the workflow, then configure the range to read data from.
4. Hit the Execute Workflow button at the bottom of the n8n canvas.

Here is how it works:

- The workflow first searches for existing Salesforce accounts by name.
- It then branches out depending on whether the account already exists in Salesforce or not.
- If an account does not exist yet, it will be created.
- The data is then normalised before both branches converge again.
- Finally, the contacts are created or updated as needed in Salesforce.
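The normalisation step can be pictured as a small Code node along these lines — a sketch only; the Excel column names used here are assumptions and should match your workbook:

```javascript
// Sketch of the normalisation step: map Excel rows and Salesforce search
// results onto one shared shape before both branches converge.
// Column names ("First Name", "Last Name", "Account Name", "Email") are
// assumptions — adjust them to your workbook.
return $input.all().map((item) => {
  const row = item.json;
  return {
    json: {
      firstName: row["First Name"] ?? row.FirstName,
      lastName: row["Last Name"] ?? row.LastName,
      email: row["Email"] ?? row.Email,
      // Salesforce's account search returns an Id; fresh rows only have a name
      accountId: row.Id ?? row.accountId ?? null,
      accountName: row["Account Name"] ?? row.Name,
    },
  };
});
```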
by PretenderX
This template automates sending a DingTalk message on new Azure DevOps Pull Request Created events. It uses a MySQL database to store mappings between Azure users and DingTalk users, so the right users get notified.

Set up instructions

1. Define your own path value for the ReceiveTfsPullRequestCreatedMessage Webhook node, then copy the webhook URL to create an Azure DevOps Service Hook that calls the webhook on the Pull Request Created event.
2. In order to configure the LoadDingTalkAccountMap node, you need to create a MySQL table as below:

|Name|Type|Length|Key|
|-|-|-|-|
|TfsAccount|varchar|255||
|UserName|varchar|255||
|DingTalkMobile|varchar|255||

3. You can customize the DingTalk message content by editing the BuildDingTalkWebHookData node.
4. Define the URL of the SendDingTalkMessageViaWebHook HTTP Request node as your DingTalk group chat robot webhook URL.
5. Send a test or production message from Azure DevOps to test.
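For orientation, the BuildDingTalkWebHookData node typically assembles a DingTalk custom-robot payload like the following — a sketch; the message wording and the Azure DevOps event field names are assumptions:

```javascript
// Sketch of what BuildDingTalkWebHookData might produce: a DingTalk
// custom-robot payload that @-mentions the mapped users by mobile number.
// The Azure DevOps event fields used here are assumptions.
const pr = $json.resource ?? {};
const mobiles = $input.all().map((i) => i.json.DingTalkMobile).filter(Boolean);

return [{
  json: {
    msgtype: "text",
    text: {
      content: `New pull request: ${pr.title ?? ""} by ${pr.createdBy?.displayName ?? "unknown"}`,
    },
    at: {
      atMobiles: mobiles, // DingTalk @-mentions group members by mobile number
      isAtAll: false,
    },
  },
}];
```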
by Harshil Agrawal
This workflow appends, looks up, updates, and reads data from a Google Sheets spreadsheet.

Set node: The Set node is used to generate the data that we want to add to Google Sheets. Depending on your use case you might have data coming from a different source. For example, you might be fetching data from a webhook call. Add the node that will fetch the data that you want to add to the Google Sheet. You can then use the Set node to set the data that you want to add to Google Sheets.

Google Sheets node: This node adds the data from the Set node in a new row to the Google Sheet. You will have to enter the Spreadsheet ID and the Range to specify which sheet you want to add the data to.

Google Sheets1 node: This node looks for a specific value in the Google Sheet and returns all the rows that contain the value. In this example, we are looking for the value Berlin in our Google Sheet. If you want to look for a different value, enter that value in the Lookup Value field, and specify the column in the Lookup Column field.

Set1 node: This Set node raises the rent by $100 for the houses in Berlin. We pass this new data to the next nodes in the workflow.

Google Sheets2 node: This node updates the rent for the houses in Berlin with the new rent set in the previous node. We are mapping the rows with their ID. Depending on your use case, you might want to map the values with a different column. To set this, enter the column name in the Key field.

Google Sheets3 node: This node returns the information from the Google Sheet. You can specify the columns that should get returned in the Range field. Currently, the node fetches the data for columns A to D. To fetch the data only for columns A to C, set the range to A:C.

This workflow can be broken down into different workflows, each with its own use case. For example, we can have a workflow that appends new data to a Google Sheet, and another workflow that looks up a certain value and returns it. You can learn to build this workflow on the documentation page of the Google Sheets node.
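If you prefer a Code node over Set1 for the rent adjustment, the same step could be sketched like this — the field name rent is an assumption based on the example above:

```javascript
// Sketch of the Set1 step as a Code node: raise the rent by $100 for
// every row the lookup returned. The "rent" field name is an assumption.
return $input.all().map((item) => ({
  json: {
    ...item.json,
    rent: Number(item.json.rent) + 100,
  },
}));
```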
by Mauricio Perera
Overview

This workflow exposes an HTTP endpoint (webhook) that accepts a JSON definition of an n8n workflow, validates it, and—if everything is correct—dynamically creates that workflow in the n8n instance via its internal API. If any validation fails or the API call encounters an error, an explanatory message with details is returned.

Workflow Diagram

```
Webhook
  │
  ▼
Validate JSON ── fails validation ──► Validation Error
  │
  └─ passes ─► Validation Successful?
                 │
                 ├─ true ─► Create Workflow ──► API Successful? ──► Success Response
                 │                                   │
                 │                                   └─ false ─► API Error
                 └─ false ─► Validation Error
```

Step-by-Step Details

1. Webhook
**Type**: Webhook (POST)
**Path**: /webhook/create-workflow
**Purpose**: Expose a URL to receive a JSON definition of a workflow.
**Expected Input**: JSON containing the main workflow fields (name, nodes, connections, settings).

2. Validate JSON
**Type**: Code Node (JavaScript)
**Validations Performed**:
- Ensure that payload exists and contains both name and nodes.
- Verify that nodes is an array with at least one item.
- Check that each node includes the required fields: id, name, type, position.
- If missing, initialize connections, settings, parameters, and typeVersion.

**Output if Error**:
```
{ "success": false, "message": "<error description>" }
```

**Output if Valid**:
```
{
  "success": true,
  "apiWorkflow": {
    "name": payload.name,
    "nodes": payload.nodes,
    "connections": payload.connections,
    "settings": payload.settings
  }
}
```

3. Validation Successful?
**Type**: IF Node
**Condition**: $json.success === true
**Branches**:
- true: proceed to Create Workflow
- false: route to Validation Error

4. Create Workflow
**Type**: HTTP Request (POST)
**URL**: http://127.0.0.1:5678/api/v1/workflows
**Authentication**: Header Auth with internal credentials
**Body**: The apiWorkflow object generated earlier
**Options**: continueOnFail: true (to handle failures in the next IF)

5. API Successful?
**Type**: IF Node
**Condition**: $response.statusCode <= 299
**Branches**:
- true: proceed to Success Response
- false: route to API Error

6. Success Response
**Type**: SET Node
**Output**:
```
{
  "success": "true",
  "message": "Workflow created successfully",
  "workflowId": "{{ $json.data[0].id }}",
  "workflowName": "{{ $json.data[0].name }}",
  "createdAt": "{{ $json.data[0].createdAt }}",
  "url": "http://localhost:5678/workflow/{{ $json.data[0].id }}"
}
```

7. API Error
**Type**: SET Node
**Output**:
```
{
  "success": "false",
  "message": "Error creating workflow",
  "error": "{{ JSON.stringify($json) }}",
  "statusCode": "{{ $response.statusCode }}"
}
```

8. Validation Error
**Type**: SET Node
**Output**:
```
{ "success": false, "message": "{{ $json.message }}" }
```

Example Webhook Request

```bash
curl --location --request POST 'http://localhost:5678/webhook/create-workflow' \
--header 'Content-Type: application/json' \
--data-raw '{
  "name": "My Dynamic Workflow",
  "nodes": [
    {
      "id": "start-node",
      "name": "Start",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [100, 100],
      "parameters": {}
    },
    {
      "id": "set-node",
      "name": "Set",
      "type": "n8n-nodes-base.set",
      "typeVersion": 1,
      "position": [300, 100],
      "parameters": {
        "values": {
          "string": [
            {
              "name": "message",
              "value": "Hello from a webhook-created workflow!"
            }
          ]
        }
      }
    }
  ],
  "connections": {
    "Start": {
      "main": [
        [
          { "node": "Set", "type": "main", "index": 0 }
        ]
      ]
    }
  },
  "settings": {}
}'
```

Expected Success Response

```json
{
  "success": "true",
  "message": "Workflow created successfully",
  "workflowId": "abcdef1234567890",
  "workflowName": "My Dynamic Workflow",
  "createdAt": "2025-05-31T12:34:56.789Z",
  "url": "http://localhost:5678/workflow/abcdef1234567890"
}
```

Validation Error Response

```json
{ "success": false, "message": "The 'name' field is required in the workflow" }
```

API Error Response

```json
{
  "success": "false",
  "message": "Error creating workflow",
  "error": "{ ...full API response details... }",
  "statusCode": 401
}
```
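The Validate JSON step can be sketched as follows, based on the validations listed above — a minimal sketch, not the exact node contents:

```javascript
// Minimal sketch of the Validate JSON Code node, following the
// validations listed above. Not the exact node contents.
const payload = $json.body ?? $json;

if (!payload || !payload.name) {
  return [{ json: { success: false, message: "The 'name' field is required in the workflow" } }];
}
if (!Array.isArray(payload.nodes) || payload.nodes.length === 0) {
  return [{ json: { success: false, message: "'nodes' must be a non-empty array" } }];
}
for (const node of payload.nodes) {
  for (const field of ["id", "name", "type", "position"]) {
    if (node[field] === undefined) {
      return [{ json: { success: false, message: `Node is missing required field '${field}'` } }];
    }
  }
  // Initialize optional fields if missing
  node.parameters = node.parameters ?? {};
  node.typeVersion = node.typeVersion ?? 1;
}

return [{
  json: {
    success: true,
    apiWorkflow: {
      name: payload.name,
      nodes: payload.nodes,
      connections: payload.connections ?? {},
      settings: payload.settings ?? {},
    },
  },
}];
```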
by Nikan Noorafkan
🧾 Template: Extract Ad Creatives from Google's Ads Transparency Center

This n8n workflow pulls ad creatives from Google's Ads Transparency Center using SerpApi, filtered by a specific domain and region. It extracts, filters, categorizes, and exports ads into neatly formatted CSV files for easy analysis.

👤 Who's it for?

- **Marketing Analysts** researching competitive PPC strategies
- **Ad Intelligence Teams** monitoring creatives from specific brands
- **Digital Marketers** gathering visual and copy trends
- **Journalists & Watchdogs** reviewing ad activity transparency

✅ Features

- **Fetch creatives** using SerpApi's google_ads_transparency_center engine
- **Filter results** to include only ads with an exact match to your target domain
- **Categorize** by ad format: text, image, or video
- **Export CSVs**: Generates a downloadable file for each format under the /files/ directory

🛠 How to Use

1. Edit the "Set Domain & Region" node: domain (e.g. example.com) and region (SerpApi numeric region code → See codes).
2. Add your SerpApi API key in the "Get Ads Page 1" node's credentials section.
3. Run the workflow: click "Test workflow" to initiate the process.
4. Download your results: navigate to /files/ to find text_{domain}_ads.csv, image_{domain}_ads.csv, and video_{domain}_ads.csv.

📌 Notes

- Only the first page (up to 50 creatives) is fetched; pagination is not included.
- Sticky Notes inside the workflow nodes offer helpful internal annotations.
- CSV files include creative-level details: ad copy, images, video links, etc.
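The underlying SerpApi call made by "Get Ads Page 1" can be approximated like this — a sketch; the text and region parameter names are assumptions based on SerpApi's google_ads_transparency_center engine, and all values are placeholders:

```javascript
// Sketch of the SerpApi request behind "Get Ads Page 1".
// Parameter names ("text", "region") are assumptions based on SerpApi's
// google_ads_transparency_center engine; values are placeholders.
const params = new URLSearchParams({
  engine: "google_ads_transparency_center",
  text: "example.com",        // target domain
  region: "2840",             // SerpApi numeric region code (e.g. 2840 = US)
  api_key: "YOUR_SERPAPI_KEY",
});

const res = await fetch(`https://serpapi.com/search.json?${params}`);
const data = await res.json();
// The returned creatives (e.g. data.ad_creatives — an assumption) are then
// filtered by exact domain match and split by format: text, image, video.
```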
by Solomon
The Stripe API does not include custom fields in invoice or charge data, so you have to get them from the Checkout Sessions endpoint. But that endpoint is not easy for beginners: it has dictionary parameters and pagination settings. This workflow solves that problem with a preconfigured GET request that fetches all the checkout sessions from the last 7 days. It then transforms the data to make it easier to work with and lets you filter by the custom_fields you want to get.

Want to generate Stripe invoices automatically? Open 👉 this workflow.

Check out my other templates: https://n8n.io/creators/solomon/
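For reference, the equivalent raw request looks roughly like this — a sketch using Stripe's Checkout Sessions list endpoint with cursor pagination; the workflow's exact parameters may differ:

```javascript
// Sketch: list Checkout Sessions created in the last 7 days, following
// Stripe's cursor pagination. The workflow's exact parameters may differ.
const since = Math.floor(Date.now() / 1000) - 7 * 24 * 60 * 60;
const sessions = [];
let startingAfter;

while (true) {
  const params = new URLSearchParams({ "created[gte]": String(since), limit: "100" });
  if (startingAfter) params.set("starting_after", startingAfter);

  const res = await fetch(`https://api.stripe.com/v1/checkout/sessions?${params}`, {
    headers: { Authorization: "Bearer sk_test_YOUR_KEY" },
  });
  const page = await res.json();
  sessions.push(...page.data);

  if (!page.has_more) break;
  startingAfter = page.data[page.data.length - 1].id;
}
// Each session object carries a custom_fields array to filter on.
```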
by explorium
Automatically enrich prospect data from HubSpot using Explorium and create leads in Salesforce

This n8n workflow streamlines the process of enriching prospect information by automatically pulling data from HubSpot, processing it through Explorium's AI-powered tools, and creating new leads in Salesforce with enhanced prospect details.

Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

HubSpot
- **Type**: App Token (or OAuth2 for broader compatibility)
- **Used for**: triggering on new contacts, fetching contact data

Explorium API
- **Type**: Generic Header Auth
- **Header**: Authorization
- **Value**: Bearer YOUR_API_KEY
- Get your Explorium API key

Salesforce
- **Type**: OAuth2 or Username/Password
- **Used for**: creating new lead records

Go to Settings → Credentials, create these three credentials, and assign them in the respective nodes before running the workflow.

Workflow Overview

Node 1: HubSpot Trigger
This node listens for real-time events from the connected HubSpot account. Once triggered, the node passes metadata about the event to the next step in the flow.

Node 2: HubSpot
This node fetches contact details from HubSpot after the trigger event.
- **Credential**: Connected using a HubSpot App Token
- **Resource**: Contact
- **Operation**: Get Contact
- **Return All**: Disabled
This node retrieves the full contact details needed for further processing and enrichment.

Node 3: Match prospect
This node sends each contact's data to Explorium's AI-powered prospect matching API in real time.
- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/match
- **Authentication**: Generic Header Auth (using a configured credential)
- **Headers**: Content-Type: application/json
The request body is dynamically built from contact data, typically including: full_name, company_name, email, phone_number, linkedin. These fields are matched against Explorium's intelligence graph to return enriched or validated profiles.
Response output: total_matches, matched_prospects, and a prospect_id. Each response is used downstream to enrich, validate, or create lead information.

Node 4: Filter
This node filters the output from the Match prospect step to ensure that only valid, matched results continue in the flow. Only records that contain at least one matched prospect with a non-null prospect_id are passed forward.
Status: Currently deactivated (as shown by the "Deactivate" label)

Node 5: Extract Prospect IDs from Matched Results
This node extracts all valid prospect_id values from previously matched prospects and compiles them into a flat array. It loops over all matched items, extracts each prospect_id from the matched_prospects array, and returns a single object with an array of all prospect_ids.

Node 6: Explorium Enrich Contacts Information
This node performs bulk enrichment of contacts by querying Explorium with a list of matched prospect_ids.
Node configuration:
- **Method**: POST
- **Endpoint**: https://api.explorium.ai/v1/prospects/contacts_information/bulk_enrich
- **Authentication**: Header Auth (using saved credentials)
- **Headers**: "Content-Type": "application/json", "Accept": "application/json"
Returns enriched contact information, such as:
- **emails**: professional/personal email addresses
- **phone_numbers**: mobile and work numbers
- professions_email, professional_email_status, mobile_phone

Node 7: Explorium Enrich Profiles
This additional enrichment node provides supplementary contact data enhancement, running in parallel with the primary enrichment process.
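Based on the fields listed above, the match request body might be shaped like this — a sketch; the wrapper shape Explorium expects is an assumption, and the HubSpot property names are illustrative:

```javascript
// Sketch of the body sent to POST /v1/prospects/match, built from the
// HubSpot contact. The "prospects_to_match" wrapper is an assumption;
// the inner field names come from the description above.
const contact = $json.properties ?? $json;

return [{
  json: {
    prospects_to_match: [
      {
        full_name: `${contact.firstname ?? ""} ${contact.lastname ?? ""}`.trim(),
        company_name: contact.company,
        email: contact.email,
        phone_number: contact.phone,
        linkedin: contact.linkedin_url,
      },
    ],
  },
}];
```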
Node 8: Merge
This node combines multiple data streams from the parallel enrichment processes into a single output, allowing you to consolidate data from different Explorium enrichment endpoints. The "combine" setting indicates it will merge the incoming data streams rather than overwriting them.

Node 9: Code - flatten
This custom code node processes and transforms the merged enrichment data before creating the Salesforce lead. It can be used to:
- Flatten nested data structures
- Format data according to Salesforce field requirements
- Apply business logic or data validation
- Map Explorium fields to Salesforce lead properties
- Handle data type conversions

Node 10: Salesforce
This final node creates new leads in Salesforce using the enriched data returned by Explorium.
- **Credential**: Salesforce OAuth2 or Username/Password
- **Resource**: Lead
- **Operation**: Create Lead
The node creates new lead records with enriched information including contact details, company information, and professional data obtained through the Explorium enrichment process.

Workflow Flow Summary

1. Trigger: HubSpot webhook triggers on new/updated contacts
2. Fetch: Retrieve contact details from HubSpot
3. Match: Find prospect matches using Explorium
4. Filter: Keep only successfully matched prospects (currently deactivated)
5. Extract: Compile prospect IDs for bulk enrichment
6. Enrich: Parallel enrichment of contact information through multiple Explorium endpoints
7. Merge: Combine enrichment results
8. Transform: Flatten and prepare data for Salesforce (Code node)
9. Create: Create new lead records in Salesforce

This workflow ensures comprehensive data enrichment while maintaining data quality and providing a seamless integration between HubSpot prospect data and Salesforce lead creation. The parallel enrichment structure maximizes data collection efficiency before creating high-quality leads in your CRM system.
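A minimal version of the "Code - flatten" node could look like this — a sketch; the enrichment field paths and the Salesforce field names are assumptions to adapt to your data:

```javascript
// Sketch of the "Code - flatten" node: pick the first email and phone
// from the merged enrichment results and map them to Salesforce lead
// fields. Enrichment paths and Salesforce field names are assumptions.
return $input.all().map((item) => {
  const data = item.json.data ?? item.json;
  return {
    json: {
      LastName: data.full_name ?? "Unknown",
      Company: data.company_name ?? "Unknown",
      Email: data.emails?.[0]?.address ?? data.professional_email,
      Phone: data.phone_numbers?.[0]?.number ?? data.mobile_phone,
    },
  };
});
```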
by Miquel Colomer
🎯 Precision Prospecting: Automate LinkedIn Lead Gen with n8n & Bright Data

📝 Overview
This workflow turns n8n into an AI-powered prospector, automatically searching Google for LinkedIn profiles, scraping profile data via Bright Data, and summarizing key details. Ideal for sales and recruitment teams seeking targeted lead lists without manual research.

🎥 Workflow in Action
Want to see this workflow in action? You have a chat window output below:

🔑 Key Features
- **AI Chat Trigger**: Start prospecting via conversational prompts.
- **Contextual Memory**: Retains the last 20 messages for coherent dialogue.
- **Automated Google Search**: Generates site-restricted queries and fetches the top result.
- **Bright Data Scraping**: Synchronously scrapes LinkedIn profile details by URL.
- **Intelligent Filtering**: Extracts only valid LinkedIn profile links.
- **Limit Control**: Returns a single, most relevant profile per request.
- **LLM Summary**: Uses GPT-4o-mini to interpret and present scraped data.

🚀 How It Works (Step-by-Step)

Prerequisites:
- n8n ≥ v1.0 with community nodes: install n8n-nodes-brightdata (not a verified community node).
- API credentials: OpenAI, Bright Data (web unlocker zone "web_unlocker1").
- Webhook endpoint for chat trigger.

Node Configuration:
- When chat message received (chatTrigger): Fires on user prompt.
- Simple Memory1 (memoryBufferWindow): Stores the last 20 chat messages.
- AI Prospector Agent (agent): Orchestrates search logic.
- Get 1 Google Result (brightData): Performs a Google search with site:linkedin.com/in.
- Get Links from Body (html): Extracts all `<a>` hrefs from the search result page.
- Extract Links (splitOut): Splits out individual link entries.
- Filter only LinkedIn Profiles (filter): Ensures the URL contains "linkedin.com/" and starts with "https://".
- Limit (limit): Restricts output to the first valid profile URL.
- Search LinkedIn URI (toolWorkflow): Passes the URL to a secondary workflow to fetch the first link.
- Get LinkedIn Profile Data (brightDataTool): Scrapes the profile JSON.
- OpenAI Chat Model (lmChatOpenAi): Summarizes and formats the scraped data.

Workflow Logic:
- The user asks for a person by company & name, company & position, or LinkedIn URL.
- The Agent builds a Google query (e.g., site:linkedin.com/in bright data cmo) and calls "Get 1 Google Result."
- Extracted links are filtered and limited to the top valid profile (see the filter sketch below).
- If the user provided a direct LinkedIn URL, the Agent skips the search and scrapes immediately.
- The scraped profile JSON is passed to GPT-4o-mini to generate a concise summary.

Testing & Optimization:
- Trigger via Execute Workflow for dry runs.
- Inspect intermediate node outputs in n8n's Execution panel.
- Adjust maxIterations or the memory window length for performance.
- Tune the Bright Data zone or country settings to optimize scraping speed.

Deployment & Monitoring:
- Activate the workflow and expose its webhook URL.
- Use n8n's built-in alerts or external monitoring (e.g., Slack notifications) on failures.
- Rotate credentials via n8n's Credential Vault when needed.
- Version-control the workflow via duplicates or Git-backed n8n instances.

✅ Pre-requisites
- **OpenAI Account**: API key for GPT-4o-mini.
- **Bright Data Account**: Zone "web_unlocker1" and dataset gd_l1viktl72bvl7bjuj0.
- **n8n Version**: v1.0+ with community nodes installed.
- **Permissions**: Webhook access, Credential Vault read/write.

👤 Who Is This For?
- Sales teams automating outbound LinkedIn prospecting.
- Recruiters sourcing candidates without manual scraping.
- Marketing ops looking to enrich CRM with accurate profile data.
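The "Filter only LinkedIn Profiles" step amounts to the following check — a sketch in Code-node form; the actual workflow uses a Filter node with the same two conditions:

```javascript
// Sketch of the "Filter only LinkedIn Profiles" conditions as a Code node.
// The actual workflow uses a Filter node with the same two checks.
return $input.all().filter((item) => {
  const url = String(item.json.href ?? item.json.url ?? "");
  return url.startsWith("https://") && url.includes("linkedin.com/");
});
```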
📈 Benefits & Use Cases
- **Efficiency**: Reduces hours of manual search and data entry to seconds.
- **Accuracy**: Filters out non-LinkedIn links and ensures high-quality results.
- **Scalability**: Handles multiple prospect requests concurrently via chat or API.
- **Integration**: Easily hooks into CRMs or email sequencers downstream.

Workflow created and verified by Miquel Colomer https://www.linkedin.com/in/miquelcolomersalas/ and N8nHackers https://n8nhackers.com
by Miquel Colomer
📝 Overview
This workflow transforms n8n into a smart real-estate concierge by combining an AI chat interface with Bright Data's marketplace datasets. Users interact via chat to specify city, price, bedrooms, and bathrooms—and receive a curated list of three homes for sale, complete with images and briefings.

🎥 Workflow in Action
Want to see this workflow in action? Play the video.

🔑 Key Features
- **AI-Powered Chat Trigger**: Instantly start conversations using LangChain's Chat Trigger node.
- **Contextual Memory**: Retain up to 30 recent messages for coherent back-and-forth.
- **Bright Data Integration**: Dynamically filter "FOR_SALE" properties by city, price, bedrooms, and bathrooms (limit = 3).
- **Automated Snapshot Retrieval**: Poll for dataset readiness and fetch full snapshot content.
- **HTML-Formatted Output**: Present results as a `<ul>` of `<li>` items, embedding property images.

🚀 How It Works (Step-by-Step)

Prerequisites:
- n8n ≥ v1.0
- Community nodes: install n8n-nodes-brightdata (an unverified community node)
- API credentials: OpenAI, Bright Data
- Webhook endpoint to receive chat messages

Node Configuration:
- Chat Trigger: Listens for incoming chat messages; shows a welcome screen.
- Memory Buffer: Stores the last 30 messages for context.
- OpenAI Chat Model: Uses GPT-4o-mini to interpret user intent.
- Real Estate AI Agent: Orchestrates filtering logic, calls tools, and formats responses.
- Bright Data "Filter Dataset" Tool: Applies user-defined filters plus homeStatus = FOR_SALE (see the filter sketch below).
- Wait & Recover Snapshot: Polls until the snapshot is ready, then fetches its content.
- Get Snapshot Content: Converts raw JSON into a structured list.

Workflow Logic:
- The user sends search criteria → the Agent validates inputs.
- The Agent invokes "Filter Dataset" once all filters are present.
- Upon dataset readiness, the snapshot is retrieved and parsed.
- The final output is rendered as a bullet list with property images.

Testing & Optimization:
- Use the built-in Execute Workflow trigger for rapid dry runs.
- Inspect node outputs in n8n's UI; adjust filter defaults or snapshot limits.
- Tune OpenAI model parameters (e.g., maxIterations) for faster responses.

Deployment & Monitoring:
- Activate the main workflow and expose its webhook URL.
- Monitor executions in the "Executions" panel; set up alerts for errors.
- Archive or duplicate workflows as needed; update credentials via the credential manager.

✅ Pre-requisites
- **Bright Data Account**: API key for marketplaceDataset.
- **OpenAI Account**: Access to the GPT-4o-mini model.
- **n8n Version**: v1.0 or later with community node support.
- **Permissions**: Webhook access, credential vault read/write.

👤 Who Is This For?
- Real-estate agencies and brokers seeking to automate client queries.
- PropTech startups building conversational search tools.
- Data analysts who want on-demand property snapshots without manual scraping.

📈 Benefits & Use Cases
- **Time Savings**: Replace manual MLS searches with an AI-driven chat.
- **Scalability**: Serve multiple clients simultaneously via webchat or embedded widget.
- **Consistency**: Always report exactly three properties, ensuring concise results.
- **Engagement**: Visual listings with images boost user satisfaction and conversion.

Workflow created and verified by Miquel Colomer https://www.linkedin.com/in/miquelcolomersalas/ and N8nHackers https://n8nhackers.com
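The filter the agent sends to the "Filter Dataset" tool can be pictured like this — a sketch only; the exact Bright Data filter syntax is an assumption, but the fields mirror those listed above:

```javascript
// Sketch of the dataset filter built from the user's chat criteria.
// The exact Bright Data filter syntax is an assumption; the fields
// (city, price, bedrooms, bathrooms, homeStatus) mirror the description.
const criteria = {
  city: "Austin",
  maxPrice: 600000,
  bedrooms: 3,
  bathrooms: 2,
};

const filter = {
  operator: "and",
  filters: [
    { name: "city", operator: "=", value: criteria.city },
    { name: "price", operator: "<=", value: criteria.maxPrice },
    { name: "bedrooms", operator: ">=", value: criteria.bedrooms },
    { name: "bathrooms", operator: ">=", value: criteria.bathrooms },
    { name: "homeStatus", operator: "=", value: "FOR_SALE" }, // always applied
  ],
};

const recordsLimit = 3; // the agent always requests exactly three homes
```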
by Rajeet Nair
📖 Description

🔹 How it works
This workflow introduces an AI + Human-in-the-Loop pipeline for employee timesheet management. It combines the power of Google Drive, AI (OCR + LLM), and Gmail with a human review step to ensure accuracy and compliance.

1. AI-Powered File Discovery
   - Scans a Google Drive folder for new or updated timesheet files (PDF, Word, Excel, Images).
2. AI Data Extraction
   - Uses OCR and an LLM (Mistral) to intelligently read and extract structured data.
   - Supports multiple formats: PDF, Word (DOC/DOCX), Excel (XLS/XLSX), and image files (JPG, PNG, scanned documents).
   - Creates clean JSON with file details and timesheet logs (date, hours worked, tasks, notes) — see the sketch below.
3. Smart Data Formatting
   - Converts AI output into a clear HTML summary table for easy review.
   - Flags potential anomalies (missing hours, duplicate dates, irregular entries).
4. Human-in-the-Loop Verification
   - Sends an approval email via Gmail containing: file metadata, the AI-generated HTML summary, and a JSON attachment of the raw extracted data.
   - HR/Managers review the summary and approve/reject before final actions occur.
5. Post-Approval Automation (optional)
   - Approved records can be saved in a separate Google Drive folder.
   - Employees or HR receive confirmation emails.

⚙️ Set up steps

1. Connect Credentials
   - Add Google Drive and Gmail credentials in n8n.
   - Configure Mistral (or any LLM) API credentials.
2. Configure Google Drive
   - In the "Search files and folders" node, replace the folderId with your company's timesheet folder ID.
3. Customize Extraction Schema
   - Sticky notes explain how the JSON output is structured. Adapt it for your organization's needs (e.g., overtime, project codes).
4. Set Up Human Verification Emails
   - Update the Gmail node recipients to your HR or approval team.
   - Customize the email body (AI summary + JSON file attached).
5. Activate & Test
   - Enable the workflow.
   - Upload a sample timesheet to trigger the AI + human verification loop.

⚡ Result: A robust AI + Human-in-the-Loop workflow that reduces repetitive data entry, prevents payroll errors, and gives HR full confidence before final approval.
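The extracted JSON might be shaped as follows — an illustrative sketch; the field names are assumptions based on the description, so adapt them in the extraction schema:

```json
{
  "file": {
    "name": "timesheet_march.pdf",
    "type": "pdf",
    "drive_id": "FILE_ID_PLACEHOLDER"
  },
  "employee": "Jane Doe",
  "entries": [
    {
      "date": "2025-03-03",
      "hours_worked": 7.5,
      "tasks": "Client onboarding calls",
      "notes": "Half day on Friday"
    }
  ],
  "anomalies": ["missing hours on 2025-03-04"]
}
```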