by Samir Saci
Tags: Supply Chain Management, Logistics, Transportation, Data Transmission

### Context

Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris and the founder of LogiGreen Consulting. We help small and medium businesses improve their logistics processes using AI, Data Analytics and Automation.

> Sustainable and Efficient supply chains with N8N!

📬 For business inquiries, you can reach me here.

### What is an EDI Message?

Electronic Data Interchange (EDI) is a standardized method of automatically transferring data between computer systems. EDI messages ensure the smooth flow of essential transactional data such as purchase orders, invoices, shipping notices, and more. For instance, a manufacturing company can receive purchase orders from a retailer via EDI. However, transmitting and processing these messages usually requires complex integration.

### Who is this template for?

This workflow template is designed for small companies that cannot connect directly to their customers' EDI systems and therefore need to process incoming EDI messages manually.

### How does it work?

This workflow uses a Gmail Trigger to analyze all incoming emails.

- 📧 Gmail Trigger → Detects emails with "EDI" in the subject.
- 📜 Parse EDI Message → Uses a JavaScript Code node to extract structured data (a sketch follows at the end of this description).
- 📊 Format the Data → Converts it into a table-friendly format.
- 📑 Update Google Sheets → Automatically logs the processed orders.

### Prerequisites

This workflow does not require any paid subscription.

- A Google Drive account with a folder containing a Google Sheet
- API credentials: Google Drive API, Google Sheets API and Gmail API
- A Google Sheet to store the shipment records (you do not need to prepare the columns)

### Next Steps

Follow the sticky notes to set up the parameters inside each node and get ready to improve your logistics operations!

📺 Watch the Step-by-Step Guide
🎥 Check My Tutorial

🚀 Interested in applications of n8n for Logistics & Supply Chain Management? Let's connect on LinkedIn.

### Notes

This template includes an example EDI message to test the workflow.

If you want to learn more about Electronic Data Interchange:
🚚 Blog Article about Electronic Data Interchange (EDI)

This workflow has been created with n8n 1.82.1.

Submitted: March 19th, 2025
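To illustrate the parsing step above, here is a minimal sketch of what the JavaScript Code node could look like. It assumes an ANSI X12-style purchase order with `~` segment terminators and `*` element separators; the segment names, element positions, and the `text` field from the Gmail Trigger are assumptions, so adapt them to the example EDI message shipped with the template.

```javascript
// Minimal sketch of the "Parse EDI Message" Code node (assumptions noted above).
// Splits the raw message into segments, then picks out the PO header and line items.
const raw = $json.text || '';                    // email body from the Gmail Trigger (assumed field)
const segments = raw.split('~').map(s => s.trim()).filter(Boolean);

const order = { lines: [] };
for (const segment of segments) {
  const el = segment.split('*');
  if (el[0] === 'BEG') {                         // purchase order header segment
    order.poNumber = el[3];
    order.poDate = el[5];
  } else if (el[0] === 'PO1') {                  // line item segment
    order.lines.push({
      quantity: Number(el[2]),
      unit: el[3],
      unitPrice: Number(el[4]),
      sku: el[7],
    });
  }
}

return [{ json: order }];
```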
by Davide
This workflow enables users to perform web searches directly from Telegram using the Brave search engine. By simply sending the command /brave followed by a query, the workflow retrieves search results from Brave and returns them as a Telegram message. It is ideal for users who want a quick and private way to search the web without switching between apps. 🚀

This workflow is a powerful tool for automating interactions with Brave tools through Telegram, giving users quick and easy access to information directly in their chat. Below is a breakdown of the workflow:

### 1. How It Works

The workflow processes user queries from Telegram, executes a Brave tool via the MCP Client, and sends the results back to the user:

- **Telegram Trigger**: The workflow starts with the Telegram Trigger node, which listens for new messages in a Telegram chat. When a message is received, the workflow checks whether it starts with the command /brave.
- **Filter Messages**: The If node filters messages that start with /brave. If the message doesn't start with /brave, the workflow stops.
- **Edit Fields**: The Edit Fields node extracts the text of the message for further processing.
- **Clean Query**: The Clean Query node removes the /brave command from the message, leaving only the user's query (a sketch follows at the end of this description).
- **List Brave Tools**: The List Brave Tools node retrieves the list of available tools from the MCP Client.
- **Execute Brave Tool**: The Exec Brave Tool node executes the first tool in the list using the cleaned query as input.
- **Send Message**: The Send Message node sends the result of the Brave tool execution back to the user in the Telegram chat.

### 2. Preliminary Steps

- Access to an n8n self-hosted instance with the community node "n8n-nodes-mcp" installed. Please see this easy guide.
- Get your Brave Search API key: https://brave.com/search/api/
- A Telegram Bot access token.
- In "List Brave Tools", create a new credential as shown in this image. In the Environment field, set this value: BRAVE_API_KEY=your-api-key

### 3. Set Up Steps

To set up and use this workflow in n8n, follow these steps:

- **Telegram configuration**: Set up Telegram credentials in n8n for the Telegram Trigger and Send Message nodes. Ensure the Telegram bot is authorized to read messages and send responses in the chat.
- **MCP Client configuration**: Set up MCP Client credentials in n8n for the List Brave Tools and Exec Brave Tool nodes. Ensure the MCP Client is configured to interact with Brave tools.
- **Test the workflow**: Send a message starting with /brave followed by a query (e.g., /brave search for AI tools) to the Telegram chat. The workflow will process the query, execute the Brave tool via the MCP Client, and send the result back to the Telegram chat.
- **Optional customization**: Modify the workflow to include additional features, such as adding more commands or tools, integrating with other APIs or services for advanced use cases, or sending notifications via other channels (e.g., email, Slack).

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
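For reference, the Clean Query step can be as simple as the Code node sketch below. It assumes the Telegram Trigger output exposes the message text under `message.text`; adjust the field path if your setup differs.

```javascript
// Minimal sketch of the "Clean Query" Code node: strip the /brave command so only
// the user's search terms reach the Brave tool. Field path is an assumption.
const text = $json.message?.text ?? '';
const query = text.replace(/^\/brave\s*/i, '').trim();

return [{ json: { query } }];
```

Sending `/brave best n8n templates` would therefore produce the query `best n8n templates`.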
by bangank36
This workflow retrieves all Shopify customers and saves them into a Google Sheets spreadsheet using the Shopify Admin REST API. It uses pagination to ensure all customers are collected efficiently. n8n does not have built-in actions for Customers, so the workflow is built around an HTTP Request node.

### How It Works

This workflow uses the HTTP Request node to fetch paginated chunks manually. Shopify uses cursor-based pagination (page_info) instead of traditional page numbers. Pagination data is stored in the response headers, so we need to enable **Include Response Headers and Status** in the HTTP Request node (see the sketch at the end of this description for how the next-page cursor can be extracted). The workflow processes customer data, saves it to Google Sheets, and formats a compatible CSV for Squarespace Contacts import. It can be run on demand or scheduled to keep your data up to date.

### Parameters

You can adjust these parameters in the HTTP Request node:

- **limit** – The number of customers per request (default: 50, max: 250).
- **fields** – Comma-separated list of fields to retrieve.
- **page_info** – Used for pagination; only limit and fields are allowed when paginating.

📌 Note: When you query paginated chunks with page_info, only the limit and fields parameters are allowed.

### Credentials

- **Shopify API Key** – Required for authentication.
- **Google Sheets API credentials** – Needed to insert data into the spreadsheet.

### Google Sheets Template

Clone this spreadsheet: 📎 Google Sheets Template

According to the Squarespace documentation, your spreadsheet can have up to three columns and must be arranged in this order (no header):

1. Email Address
2. First Name (optional)
3. Last Name (optional)
4. Shopify Customer ID (this field will be ignored)

### Exporting a Compatible CSV for Squarespace Contacts

This workflow also generates a CSV file that can be imported into Squarespace Contacts. To import the CSV into Squarespace:

1. Open the Lists & Segments panel and click on your mailing list.
2. Click Add Subscribers, then select Upload a list.
3. Click Add a CSV file and select the file to import.
4. Toggle "These subscribers accept marketing" to confirm permission.
5. Preview your list, then click Import.

### Who Is This For?

- **Shopify store owners** who need to export all customers to Google Sheets.
- Anyone looking for a **flexible and scalable** Shopify customer extraction solution.
- **Squarespace website owners** who want to bulk-create their Contacts from a CSV.

### Explore More Templates

👉 Check out my other n8n templates
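As a rough illustration of the pagination handling, the snippet below shows how the next page_info cursor could be pulled out of Shopify's Link response header inside a Code node. The exact header casing and output shape of your HTTP Request node may differ, so treat this as a sketch.

```javascript
// Extract the "next" page_info cursor from the Link header returned by Shopify.
// Requires "Include Response Headers and Status" on the HTTP Request node.
const link = $json.headers?.link ?? '';
const match = link.match(/<[^>]*[?&]page_info=([^&>]+)[^>]*>;\s*rel="next"/);

// When nextPageInfo is null there are no more pages to fetch.
return [{ json: { nextPageInfo: match ? match[1] : null } }];
```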
by ParquetReader
## 📄 Convert Parquet, Feather, ORC & Avro Files with ParquetReader

This workflow allows you to upload and inspect Parquet, Feather, ORC, or Avro files via the ParquetReader API. It instantly returns a structured JSON preview of your data — including rows, schema, and metadata — without needing to write any custom code.

### ✅ Perfect For

- Validating schema and structure before syncing or transformation
- Previewing raw columnar files on the fly
- Automating QA, ETL, or CI/CD workflows
- Converting Parquet, Avro, Feather, or ORC to JSON

### ⚙️ Use Cases

- Catch schema mismatches before pipeline runs (see the sketch at the end of this description)
- Automate column audits in incoming data files
- Enrich metadata catalogs with real-time schema detection
- Integrate file validation into automated workflows

### 🚀 How to Use This Workflow

#### 📥 Trigger via File Upload

You can trigger this flow by sending a POST request with a file using curl, Postman, or from another n8n flow.

🔧 Example (via curl):

```bash
curl -X POST http://localhost:5678/webhook-test/convert \
  -F "file=@converted.parquet"
```

> Replace converted.parquet with your local file path. You can also send Avro, ORC or Feather files.

#### 🔁 Reuse from Other Flows

You can reuse this flow by calling the webhook from another n8n workflow using an HTTP Request node. Make sure to send the file as form-data with the field name file.

### 🔍 What This Flow Does

1. Receives the uploaded file via webhook (file)
2. Sends it to https://api.parquetreader.com/parquet as multipart/form-data (field name: file)
3. Receives parsed data (rows), schema, and metadata in JSON format

### 🧪 Example JSON Response from This Flow

```json
{
  "data": [
    {
      "full_name": "Pamela Cabrera",
      "email": "bobbyharrison@example.net",
      "age": "24",
      "active": "True",
      "latitude": "-36.1577385",
      "longitude": "63.014954",
      "company": "Carter, Shaw and Parks",
      "country": "Honduras"
    }
  ],
  "meta_data": {
    "created_by": "pyarrow",
    "num_columns": 21,
    "num_rows": 10,
    "serialized_size": 7598,
    "format_version": "0.12"
  },
  "schema": [
    { "column_name": "full_name", "column_type": "string" },
    { "column_name": "email", "column_type": "string" },
    { "column_name": "age", "column_type": "int64" },
    { "column_name": "active", "column_type": "bool" },
    { "column_name": "latitude", "column_type": "double" },
    { "column_name": "longitude", "column_type": "double" },
    { "column_name": "company", "column_type": "string" },
    { "column_name": "country", "column_type": "string" }
  ]
}
```

### 🔐 API Info

- **Authentication**: None required
- **Supported formats**: .parquet, .avro, .orc, .feather
- **Free usage**: No signup needed; the API is currently open to the public
- **Limits**: Usage and file size limits may apply in the future (TBD)
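If you use this flow for the schema-validation use cases above, a small Code node in the calling workflow can compare the returned schema against an expected contract. The expected column map below is purely an example.

```javascript
// Compare the schema returned by this flow against an expected column/type map
// and report any mismatches. Replace the "expected" object with your own contract.
const expected = { full_name: 'string', email: 'string', age: 'int64' };
const actual = Object.fromEntries(
  ($json.schema ?? []).map(c => [c.column_name, c.column_type])
);

const mismatches = Object.entries(expected)
  .filter(([name, type]) => actual[name] !== type)
  .map(([name, type]) => `${name}: expected ${type}, got ${actual[name] ?? 'missing'}`);

return [{ json: { valid: mismatches.length === 0, mismatches } }];
```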
by Jonathan
This workflow is part of an MSP collection, which is publicly available on GitHub.

This workflow archives or unarchives a Clockify project, depending on a Syncro status. Note that Syncro should be set up with a webhook via 'Notification Set for Ticket - Status was changed'. It doesn't handle merging of tickets, as Syncro doesn't support a 'Notification Set' for merged tickets, so you should change a ticket to 'Resolved' before merging it.

### Prerequisites

- A Clockify account and credentials

### Nodes

- **Webhook** node triggers the workflow.
- **IF** node filters projects that don't have the status 'Resolved' (see the sketch below).
- **Clockify** nodes get all projects that do or don't have the status 'Resolved', based on the IF route.
- **HTTP Request** nodes unarchive unresolved projects and archive resolved projects, respectively.
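As a point of reference, the IF branch boils down to a status check like the Code-node sketch below. The field path into the Syncro webhook payload is an assumption; inspect a real 'Status was changed' payload to confirm it.

```javascript
// Sketch of the branching condition: is the ticket's new status "Resolved"?
// The payload path is an assumption about the Syncro webhook body.
const status = $json.body?.ticket?.status ?? '';

return [{ json: { ...$json, resolved: status === 'Resolved' } }];
```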
by Tom
This is the workflow powering the n8n demo shown at StrapiConf 2022.

The workflow searches for matching Tweets every 30 minutes using the Interval node and listens to form submissions using the Webhook node. Sentiment analysis is handled by Google via the Google Cloud Natural Language node before the result is stored in Strapi using the Strapi node.

(These were originally two separate workflows that have been combined into one to simplify sharing.)
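If you want to store a human-readable label alongside the raw score, a small Code node between the Google Cloud Natural Language node and the Strapi node could look like the sketch below. The thresholds are arbitrary, and the field path assumes the node returns the standard `documentSentiment` object.

```javascript
// Map the sentiment score (-1 to 1) to a label before storing it in Strapi.
// Thresholds and field path are assumptions; adjust them to your data.
const score = $json.documentSentiment?.score ?? 0;
const label = score > 0.25 ? 'positive' : score < -0.25 ? 'negative' : 'neutral';

return [{ json: { score, label } }];
```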
by Airtop
### About the LinkedIn Profile Discovery Automation

Are you tired of manually searching for LinkedIn profiles or paying expensive data providers for often outdated information? If you spend countless hours trying to find accurate LinkedIn URLs for your prospects or candidates, this automation will change your workflow. Give this workflow the information you have about a contact, and it will automatically augment it with a LinkedIn profile.

### How to Find a LinkedIn Profile Link

In this guide, you'll learn how to automate LinkedIn profile link discovery using Airtop's built-in node in n8n. The result is a fully automated workflow that saves you hours of manual searching while providing accurate, validated LinkedIn URLs.

### What You'll Need

- A free Airtop API key
- A Google Workspace account. If you have a Gmail account, you're all set.

Estimated setup time: 10 minutes

### Understanding the Process

This automation combines intelligent search with LinkedIn validation to ensure accuracy. Here's how it works:

1. Takes your input data (name, company, etc.) and constructs intelligent search queries (see the sketch at the end of this description)
2. Uses Google search to identify potential LinkedIn profile URLs
3. Validates the discovered URLs directly against LinkedIn to ensure accuracy
4. Returns confirmed, accurate LinkedIn profile URLs

### Setting Up Your Automation

Getting started with this automation is straightforward:

1. **Prepare your Google Sheet**
   - Create a new Google Sheet with columns for input data (name, company, domain, etc.)
   - Add columns for the output LinkedIn URL and validation status (see this example)
2. **Configure the automation**
   - Connect your Google Workspace account to n8n if you haven't already
   - Add your Airtop API credentials
   - (Optionally) Configure your Airtop Profile and sign in to LinkedIn in order to validate profile URLs
3. **Run your first test**
   - Add a few test entries to your Google Sheet
   - Run the workflow
   - Check the results in your output columns

### Customization Options

While the default setup uses Google Sheets, this automation is highly flexible:

- **Webhook integration**: Perfect for integrating with tools like Clay, Instantly, or your custom applications
- **Alternatives**: Replace Google Sheets with Airtable, Notion, or any other tools you already use for more robust database capabilities
- **Custom output formatting**: Modify the output structure to match your existing systems
- **Batch processing**: Configure for bulk processing of multiple profiles

### Real-World Applications

This automation has the potential to transform how organizations handle profile enrichment.

**Recruiting firm success story**: With this automation, a recruiting firm could save hundreds of dollars a month in data enrichment fees, achieve better accuracy, and eliminate subscription costs, while processing thousands of profiles weekly with near-perfect accuracy.

**Sales team integration**: A B2B sales team could integrate this automation with their CRM, automatically enriching new leads with validated LinkedIn profiles and saving their SDRs hours per week on manual research.
### Best Practices

To maximize the accuracy of your results:

- Always include company information (domain or company name) with your search queries
- Use full names rather than nicknames or initials when possible
- Consider including location data for more accurate results with common names
- Implement rate limiting to respect LinkedIn's usage guidelines
- Keep your input data clean and standardized for best results
- Use the integrated proxy to navigate more effectively through Google and LinkedIn

### What's Next?

Now that you've automated LinkedIn profile discovery, consider exploring related automations:

- Automated lead scoring based on LinkedIn profile data
- Email finder automation using validated LinkedIn profiles
- Integration with your CRM for automated contact enrichment
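To make the query-construction step more concrete, here is a minimal sketch of how the search string could be assembled from your sheet columns. The column names are illustrative and should be mapped to your own input data.

```javascript
// Build a Google search query that targets LinkedIn profile pages.
// Column names (name, company, domain, location) are assumptions.
const { name = '', company = '', domain = '', location = '' } = $json;
const parts = ['site:linkedin.com/in', name, company || domain, location].filter(Boolean);

return [{ json: { searchQuery: parts.join(' ') } }];
```

For example, a row with the name "Jane Doe" and the company "Acme" would produce `site:linkedin.com/in Jane Doe Acme`.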
by Airtop
### About the Airtop Automation

Are you tired of being shocked by unexpectedly high energy bills? With this automation using Airtop and n8n, you can take control of your daily energy costs and ensure you're always informed.

### How to Monitor Your Daily Energy Consumption

This automation retrieves your PG&E (Pacific Gas and Electric) energy usage data, calculates costs, and emails you the details—all without manual effort.

### What You'll Need

To get started, make sure you have the following:

- A free Airtop API Key
- PG&E account credentials - with minor adaptations, this will also work with other providers
- An email address to receive the energy cost updates

Estimated setup time: 5 minutes

### Understanding the Process

This automation works by:

1. Logging into your PG&E account using your credentials
2. Navigating to your energy usage data
3. Extracting relevant details about energy consumption and costs (a cost-summary sketch follows at the end of this description)
4. Emailing the daily summary directly to your inbox

The automation is straightforward and gives you real-time insight into your energy usage, empowering you to adjust your habits and save money.

### Setting Up Your Automation

We've created a step-by-step guide to help you set up this workflow:

1. **Insert your credentials**
   - In the tools section, add your PG&E login details as variables
   - In Airtop, add your Airtop API Key
   - Configure your email address to receive the updates
2. **Run the automation**
   - Start the scenario and watch as the automation retrieves your energy data and sends you a detailed email summary

### Customization Options

While the default setup works seamlessly, you can tweak it to suit your needs:

- **Data storage**: Store energy usage data in a database for long-term tracking and analysis
- **Visualization**: Plot graphs of your energy usage trends over time for better insights
- **Notifications**: Change the automation to only send alerts on high usage instead of a daily email

### Real-World Applications

This automation isn't just about monitoring energy usage; it's about taking control. Here are some practical applications:

- **Daily energy management**: Receive updates every morning and adjust your energy consumption based on costs
- **Smart home integration**: Use the data to automate appliances during off-peak hours
- **Budgeting**: Track energy expenses over weeks or months to plan your budget more effectively

Happy automating!
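To illustrate the cost-calculation step mentioned above, the Code node sketch below totals the day's usage and cost before the email goes out. The shape of the extracted data (an `hourlyUsage` array with `kwh` and `rate` fields) is an assumption about what your Airtop extraction returns.

```javascript
// Sum the day's kWh and cost from the extracted usage data (assumed shape).
const readings = $json.hourlyUsage ?? [];        // e.g. [{ kwh: 1.2, rate: 0.42 }, ...]
const totalKwh = readings.reduce((sum, r) => sum + r.kwh, 0);
const totalCost = readings.reduce((sum, r) => sum + r.kwh * r.rate, 0);

return [{ json: { totalKwh: Number(totalKwh.toFixed(2)), totalCost: Number(totalCost.toFixed(2)) } }];
```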
by Tom
This workflow shows a no-code approach to creating Salesforce accounts and contacts based on data coming from Excel 365 (the online version of Microsoft Excel). For a version working with regular Excel files, check out this workflow instead.

To run the workflow:

1. Make sure you have both Excel 365 and Salesforce authenticated with n8n.
2. Have a Microsoft Excel workbook with contacts and their account names ready.
3. Select the workbook and sheet in the Microsoft Excel node of the workflow, then configure the range to read data from.
4. Hit the Execute Workflow button at the bottom of the n8n canvas.

Here is how it works:

1. The workflow first searches for existing Salesforce accounts by name.
2. It then branches out depending on whether the account already exists in Salesforce or not.
3. If an account does not exist yet, it will be created.
4. The data is then normalised before both branches converge again.
5. Finally, the contacts are created or updated as needed in Salesforce.
by Yaron Been
## LinkedIn Hiring Signal Scraper — Jobs & Prospecting Using Bright Data

**Purpose**: Discover recent job posts from LinkedIn using Bright Data's Dataset API, clean the results, and log them into Google Sheets — for both job hunting and identifying high-intent B2B leads based on hiring activity.

### Use Cases

- **Job seekers** – Spot relevant openings filtered by role, city, and country.
- **Sales & prospecting** – Use job posts as buying signals. If a company is hiring for a role you support (e.g. marketers, developers, ops), it's the perfect time to reach out and offer your services.

### Tools Needed

**n8n nodes:**

- Form Trigger
- HTTP Request
- Wait
- If
- Code
- Google Sheets
- Sticky Notes (for embedded guidance)

**External services:**

- Bright Data (Dataset API)
- Google Sheets

### API Keys & Authentication Required

- **Bright Data API Key** → Add it in the HTTP Request headers: Authorization: Bearer YOUR_BRIGHTDATA_API_KEY
- **Google Sheets OAuth2** → Connect your account in n8n to allow read/write access to the spreadsheet.

### General Guidelines

- Use descriptive names for all nodes.
- Include retry logic in polling to avoid infinite loops.
- Flatten nested fields (like job_poster and base_salary).
- Strip out HTML tags from job descriptions for clean output.

### Things to Be Aware Of

- Bright Data snapshots take ~1–3 minutes — use a Wait node and polling.
- Form filters affect output significantly: 🔍 we recommend filtering by "Last 7 days" or "Past 24 hours" for fresher data.
- Avoid hardcoding values in the form — leave optional filters empty if unsure.

### Post-Processing & Outreach

After the data lands in Google Sheets, you can use it to:

- Personalize cold emails based on job titles, locations, and hiring signals.
- Send thoughtful LinkedIn messages (e.g., "Saw you're hiring a CMO...").
- Prioritize outreach to companies actively growing in your niche.

### Additional Notes

📄 Copy the Google Sheet Template: click here to make your copy, then rename it for each campaign or client.

Form fields include:

- Job Location (city or region)
- Keyword (e.g., CMO, Backend Developer)
- Country (2-letter code, e.g., US, UK)

This workflow gives you a competitive edge — 📌 for candidates: be first to apply. 📌 For sellers: be first to pitch. All based on live hiring signals from LinkedIn.

### STEP-BY-STEP WALKTHROUGH

**Step 1: Set up your Google Sheet**

1. Open this template
2. Go to File → Make a copy
3. You'll use this copy as the destination for the scraped job posts

**Step 2: Fill out the Input Form in n8n**

The form allows you to define what kind of job posts you want to scrape.

Fields:

- **Job Location** → e.g. New York, Berlin, Remote
- **Keyword** → e.g. CMO, AI Architect, Ecommerce Manager
- **Country Code (2-letter)** → e.g. US, UK, IL

💡 Pro tip: for best results, set the filter inside the workflow to time_range = "Past 24 hours" or "Last 7 days". This keeps results relevant and fresh.

**Step 3: Trigger the Bright Data Snapshot**

The workflow sends a request to Bright Data with your input.

Example API call body:

```json
[
  {
    "location": "New York",
    "keyword": "Marketing Manager",
    "country": "US",
    "time_range": "Past 24 hours",
    "job_type": "Part-time",
    "experience_level": "",
    "remote": "",
    "company": ""
  }
]
```

Bright Data will start preparing the dataset in the background.

**Step 4: Wait for the Snapshot to Complete**

The workflow includes a Wait node and a polling loop that checks every few minutes until the data is ready. You don't need to do anything here — it's all automated.
**Step 5: Clean Up the Results**

Once Bright Data responds with the full job post list:

- ✔️ Nested fields like job_poster and base_salary are flattened
- ✔️ HTML in job descriptions is removed
- ✔️ The final data is formatted for export

(A sketch of this clean-up step follows at the end of this description.)

**Step 6: Export to Google Sheets**

The final cleaned list is added to your Google Sheet (first tab). Each row is one job post, with columns like: job_title, company_name, location, salary_min, apply_link, job_description_plain

**Step 7: Use the Data for Outreach or Research**

Example for job seekers: you search for:

- Location: Berlin
- Keyword: Product Designer
- Country: DE
- Time range: Past 7 days

Now you've got a live list of roles — with salary, recruiter info, and apply links. → Use it to apply faster than others.

Example for prospecting (Sales / SDR): you search for:

- Location: London
- Keyword: Growth Marketing
- Country: UK

And you find companies hiring growth marketers. → That's your signal to offer help with media buying, SEO, CRO, or your relevant service.

Use the data to:

- Write personalized cold emails ("Saw you're hiring a Growth Marketer…")
- Start warm LinkedIn outreach
- Build lead lists of companies actively expanding in your niche

### API Credentials Required

- **Bright Data API Key** – Used in HTTP headers: Authorization: Bearer YOUR_BRIGHTDATA_API_KEY
- **Google Sheets OAuth2** – Allows n8n to read/write to your spreadsheet

### Adjustments & Customization Tips

- Modify the HTTP Request body to add more filters (e.g. job_type, remote, company)
- Increase or reduce the polling wait time depending on Bright Data speed
- Add scoring logic to prioritize listings based on title or location

### Final Notes

- 📄 Google Sheet Template: make your copy here
- ⚙️ Bright Data Dataset API: visit BrightData.com
- 📬 Personalization works best when you act quickly. Use the freshest data to reach out with context — not generic pitches.

This workflow turns LinkedIn job posts into sales insights and job leads. All in one click. Fully automated. Ready for your next move.
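The clean-up step referenced in Step 5 can be implemented with a Code node along these lines. The field names follow the columns listed in Step 6, but the nested structure of Bright Data's response is an assumption, so verify it against a real snapshot.

```javascript
// Flatten nested fields and strip HTML from each job post (run once for all items).
return items.map(item => {
  const job = item.json;
  return {
    json: {
      job_title: job.job_title,
      company_name: job.company_name,
      location: job.location,
      salary_min: job.base_salary?.min_amount ?? null,      // assumed nested field
      recruiter_name: job.job_poster?.name ?? null,          // assumed nested field
      apply_link: job.apply_link,
      job_description_plain: (job.job_description ?? '')
        .replace(/<[^>]*>/g, ' ')                            // drop HTML tags
        .replace(/\s+/g, ' ')
        .trim(),
    },
  };
});
```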
by Preston Zeller
### How It Works

This workflow automates the real estate lead qualification process by leveraging property data from BatchData. The automation follows these steps:

1. When a new lead is received through your CRM webhook, the workflow captures their address information.
2. It then makes an API call to BatchData to retrieve comprehensive property details.
3. A sophisticated scoring algorithm evaluates the lead based on property characteristics such as:
   - Property value (higher values earn more points)
   - Square footage (larger properties score higher)
   - Property age (newer constructions score higher)
   - Investment status (non-owner-occupied properties earn bonus points)
   - Lot size (larger lots receive additional points)
4. Leads are automatically classified into categories (high-value, qualified, potential, or unqualified).
5. The workflow updates your CRM with enriched property data and qualification scores.
6. High-value leads trigger immediate follow-up tasks for your team.
7. Notifications are sent to your preferred channel (Slack in this example).

The entire process happens within seconds of receiving a new lead, ensuring your sales team can prioritize the most valuable opportunities immediately. (An illustrative version of the scoring logic follows at the end of this description.)

### Who It's For

This workflow is perfect for:

- Real estate agents and brokers looking to prioritize high-value property leads
- Mortgage lenders who need to qualify borrowers based on property assets
- Home service providers (renovators, contractors, solar installers) targeting specific property types
- Property investors seeking specific investment opportunities
- Real estate marketers who want to segment audiences by property value
- Home insurance agents qualifying leads based on property characteristics

Any business that bases lead qualification on property details will benefit from this automated qualification system.

### About BatchData

BatchData is a comprehensive property data provider that offers detailed information about residential and commercial properties across the United States. Their API provides:

- Property valuation and estimates
- Ownership information
- Property characteristics (size, age, bedrooms, bathrooms)
- Tax assessment data
- Transaction history
- Occupancy status (owner-occupied vs. investment)
- Lot details and dimensions

By integrating BatchData with your lead management process, you can automatically verify and enrich leads with accurate property information, enabling more intelligent lead scoring and routing based on actual property characteristics rather than just contact information.

This workflow demonstrates how to leverage BatchData's property API to transform your lead qualification process from manual research into an automated, data-driven system that ensures high-value leads receive immediate attention.
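The sketch below shows one way the scoring and classification could look inside a Code node. The point values, thresholds, and BatchData field names are all assumptions made for illustration; tune them to your own qualification criteria and to the exact API response you receive.

```javascript
// Illustrative lead scoring based on property characteristics (assumed field names).
const p = $json.property ?? {};
let score = 0;

if (p.estimatedValue > 500000) score += 30;        // property value
else if (p.estimatedValue > 250000) score += 15;

if (p.squareFeet > 2500) score += 20;              // square footage
else if (p.squareFeet > 1500) score += 10;

if (p.yearBuilt >= 2010) score += 15;              // newer construction
if (p.ownerOccupied === false) score += 15;        // investment property bonus
if (p.lotSizeAcres > 0.5) score += 10;             // lot size

const category =
  score >= 70 ? 'high-value' :
  score >= 50 ? 'qualified'  :
  score >= 30 ? 'potential'  : 'unqualified';

return [{ json: { ...$json, leadScore: score, leadCategory: category } }];
```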
by PretenderX
This template automates sending a DingTalk message on new Azure DevOps Pull Request Created events. It uses a MySQL database to store mappings between Azure DevOps users and DingTalk users, so the right users get notified.

### Set Up Instructions

1. Define your own path value for the ReceiveTfsPullRequestCreatedMessage Webhook node, then copy the webhook URL and create an Azure DevOps Service Hook that calls the webhook on the Pull Request Created event.
2. In order to configure the LoadDingTalkAccountMap node, you need to create a MySQL table as below:

   | Name           | Type    | Length | Key |
   |----------------|---------|--------|-----|
   | TfsAccount     | varchar | 255    |     |
   | UserName       | varchar | 255    |     |
   | DingTalkMobile | varchar | 255    |     |

3. You can customize the DingTalk message content by editing the BuildDingTalkWebHookData node (a sketch follows at the end of this description).
4. Define the URL of the SendDingTalkMessageViaWebHook HTTP Request node as your DingTalk group chat robot webhook URL.
5. Send a test or production message from Azure DevOps to test.
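For reference, the BuildDingTalkWebHookData node could assemble a payload like the sketch below, which follows DingTalk's markdown message format. The Azure DevOps field paths and the DingTalkMobile lookup are assumptions; verify them against a real Pull Request Created event and your MySQL mapping.

```javascript
// Build a DingTalk markdown message that @-mentions the mapped DingTalk user.
const pr = $json.resource ?? {};                 // assumed Azure DevOps event shape
const mobile = $json.DingTalkMobile;             // looked up via LoadDingTalkAccountMap

return [{
  json: {
    msgtype: 'markdown',
    markdown: {
      title: 'New Pull Request',
      text: `### ${pr.title}\n> Created by ${pr.createdBy?.displayName}\n\n[Open Pull Request](${pr._links?.web?.href}) @${mobile}`,
    },
    at: { atMobiles: mobile ? [mobile] : [], isAtAll: false },
  },
}];
```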