by Harshil Agrawal
This workflow allows you to compress binary files to zip format.
HTTP Request node: The workflow uses the HTTP Request node to fetch files from the internet. If you want to fetch files from your local machine, replace it with the Read Binary File or Read Binary Files node.
Compression node: The Compression node compresses the files into a zip archive. If you want to compress the files to gzip instead, select the gzip format.
Based on your use case, you may want to write the files to your disk or upload them to Google Drive or Box. If you want to write the compressed file to your disk, replace the Dropbox node with the Write Binary File node; if you want to upload the file to a different service, use the respective node.
by chaufnet
Webhook that reports phishing websites to Steam and Cloudflare (if the domain is on Cloudflare) via Mailgun. You have to set the credentials for the Webhook and Mailgun nodes, and set the from email address for Mailgun. This assumes the workflow is running in n8n's Docker image, where bind-tools is not readily available but is installable.
by Raquel Giugliano
This workflow automates currency rate uploads into SAP Business One via the Service Layer, using flexible input sources such as JSON (API), SQL Server, Google Sheets, or manual values. It leverages logic branching, AI validation, and logging for complete control and traceability.
⚙️ HOW IT WORKS:
🔹 1. Receive Data via Webhook
The workflow listens on the endpoint /formulario-datos via HTTP POST. The request body should include origen: one of JSON, SQL, GoogleSheets, or Manual. Depending on the value, the flow branches accordingly.
🔹 2. Authenticate with SAP Business One
A POST request is sent to SAP B1's Login endpoint. A session cookie (B1SESSION) is retrieved and used in all subsequent API calls.
🔹 3. Switch by Origin
The flow branches into four processing paths based on origen:
- JSON: The payload is normalized using OpenAI to extract an array of rates. Each rate is sent to SAP individually after parsing.
- SQL: The SQL query provided in the payload is executed on a connected Microsoft SQL Server. The results are checked by AI to validate the date format. If valid, rates are sent to SAP.
- GoogleSheets: Rates are pulled from a connected spreadsheet. Each entry is sent to SAP in sequence.
- Manual: Uses currency, rate, and rateDate directly from the webhook payload and sends the result directly to SAP.
🔹 4. AI-Powered Enhancements (Optional but enabled)
- Normalize JSON: Uses OpenAI (LangChain node) to convert any messy structure into a uniform array under the key rate.
- Date Formatting: Another OpenAI call ensures RateDate is in yyyyMMdd format (required by SAP), converting from ISO, timestamp, or other formats (a deterministic sketch of this conversion appears at the end of this section).
🔹 5. Send to SAP Business One (Service Layer)
All paths send a POST request to /SBOBobService_SetCurrencyRate with a payload such as:
{ "Currency": "USD", "Rate": "0.92", "RateDate": "20250612" }
🔹 6. Log Results
All success/failure results are appended to a Google Sheets log (LOGS_N8N). The log includes method, URL, sent payload, status code, and message.
🛠 SETUP STEPS:
1️⃣ Create Required Credentials
Go to Credentials > + New Credential and configure:
- SAP Business One (Service Layer): Type: HTTP Request Auth or Token; Base URL: https://<your-host>:50000/b1s/v1/; provide Username, Password, and CompanyDB via variables or fields.
- Google Sheets: OAuth2 connection to a Google account with access.
- Microsoft SQL Server: SQL login credentials and host.
- OpenAI: API key with access to models like GPT-4o.
2️⃣ Environment Variables (Recommended)
Set these variables in n8n → Settings → Variables:
SAP_URL=https://<host>:50000/b1s/v1/
SAP_USER=your_username
SAP_PASSWORD=your_password
SAP_COMPANY_DB=your_companyDB
3️⃣ Prepare Google Sheets
- Sheet 1: RATE (for loading the data). Columns: Currency, Rate, RateDate
- Sheet 2: LOGS_N8N (to save the logs, success or failed). Columns: workflow, method, url, json, status_code, message
4️⃣ Activate and Test
Deploy the webhook and grab the URL.
✅ BONUS
- Built-in AI assistance for input validation and structure
- Logs all results for compliance and audit
- Flexible integration paths: perfect for hybrid or transitional systems
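For reference, the yyyyMMdd conversion that the AI date-formatting step performs could also be done deterministically in a plain n8n Code node. This is a minimal sketch, not part of the original workflow; it assumes each item carries a RateDate field as shown in the payload above.

```javascript
// Normalize RateDate (ISO string, numeric timestamp, etc.) to SAP's yyyyMMdd format.
for (const item of $input.all()) {
  const raw = item.json.RateDate;
  // Accept numeric timestamps as well as date strings.
  const d = typeof raw === 'number' ? new Date(raw) : new Date(String(raw));
  if (isNaN(d.getTime())) {
    throw new Error(`Unparseable RateDate: ${raw}`);
  }
  const yyyy = d.getUTCFullYear();
  const mm = String(d.getUTCMonth() + 1).padStart(2, '0');
  const dd = String(d.getUTCDate()).padStart(2, '0');
  item.json.RateDate = `${yyyy}${mm}${dd}`;
}
return $input.all();
```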
by Miquel Colomer
Do you want to avoid bounces in your Email Marketing campaigns? This workflow verifies emails using the uProc.io email verifier. You need to add your uProc credentials (the Email and API Key found in the Integration section of your uProc account) to n8n. The "Create Email Item" node can be replaced by any other supported service that provides an email value, like Mailchimp, Calendly, MySQL, or Typeform. The "uProc" node returns a status per checked email (deliverable, undeliverable, spamtrap, softbounce, ...). The "If" node checks whether the "deliverable" status is present. If it is not, you can mark the email as invalid to discard bounces. If the "deliverable" status is present, you can use the email in your Email Marketing campaigns. If you need detailed indicators for any email, you can use the tool "Communication" > "Check Email Exists (Extended)" to get advanced information.
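For illustration, the same "deliverable" check could be done in a Code node instead of an If node. A minimal sketch; the exact field name returned by the uProc node may differ, so status is an assumption here.

```javascript
// Keep only emails whose uProc verification status is "deliverable";
// everything else is treated as a potential bounce and discarded.
return $input.all().filter(item => item.json.status === 'deliverable');
```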
by Harshil Agrawal
This workflow allows you to store the output of a phantom in Airtable. This workflow uses the LinkedIn Profile Scraper phantom. Configure and launch this phantom from your Phantombuster account before executing the workflow. The workflow uses the following nodes:
Phantombuster node: The Phantombuster node gets the output of the LinkedIn Profile Scraper phantom that ran earlier. You can select a different phantom from the Agent dropdown list, but make sure to configure the workflow accordingly.
Set node: Using the Set node we set the data for the workflow. The data that we set in this node will be used by the next nodes in the workflow. Based on your use case, you can modify this node.
Airtable node: The Airtable node allows us to append the data to an Airtable table. Based on your use case you can replace this node with any other node. Instead of storing the data in Airtable, you can store it in a database or Google Sheet, or send it as an email using the Send Email node, Gmail node, or Microsoft Outlook node.
by Marcel Claus-Ahrens
Instructions
This automation overlays a background image with another image, making it easy to add watermarks or logos. You can use this automation to watermark your images by overlaying them with a transparent version of your logo. If you'd like to place your logo in a specific corner, feel free to adjust the position of the overlay image in the code node.
How it Works
- Both images are downloaded, so we can process binary files (you can modify the source, though).
- We extract metadata, focusing on the dimensions of each image.
- The position of the overlay image is calculated (default: dead center of the background image; see the sketch at the end of this section).
- The two images are composited together.
Limitations and Optimisation Opportunities
- The overlay image must be the same size or smaller than the background image for proper alignment.
- The overlay image does not automatically scale to match the proportions of the background image.
Enjoy the workflow!
❤️ let the work flow — Workflow Automation & Development
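For reference, the default centering could look like this in the code node. A minimal sketch, not the workflow's actual code: the metadata node names and the width/height field names are assumptions that depend on how you extract the dimensions.

```javascript
// Compute the top-left coordinates that center the overlay on the background.
const bg = $('Background Metadata').first().json;   // hypothetical node name
const overlay = $('Overlay Metadata').first().json; // hypothetical node name

let x = Math.round((bg.width - overlay.width) / 2);
let y = Math.round((bg.height - overlay.height) / 2);

// To pin the logo to the bottom-right corner with a 10 px margin instead:
// x = bg.width - overlay.width - 10;
// y = bg.height - overlay.height - 10;

return [{ json: { x, y } }];
```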
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.
This workflow automatically tracks customer satisfaction scores across multiple platforms and surveys to help improve customer experience and identify areas for enhancement. It saves you time by eliminating the need to manually check different feedback sources and provides comprehensive satisfaction analytics.
Overview
This workflow automatically scrapes customer satisfaction surveys, review platforms, and feedback forms to extract satisfaction scores and sentiment data. It uses Bright Data to access various feedback platforms without being blocked and AI to intelligently analyze satisfaction trends and identify improvement opportunities.
Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping satisfaction surveys and review platforms without being blocked
- **OpenAI**: AI agent for intelligent satisfaction analysis and trend identification
- **Google Sheets**: For storing satisfaction scores and generating analytics reports
How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your satisfaction tracking spreadsheet
5. Customize: Define feedback sources and satisfaction metrics you want to monitor
Use Cases
- **Customer Experience**: Monitor satisfaction trends across all customer touchpoints
- **Product Teams**: Identify product features that impact customer satisfaction
- **Support Teams**: Track satisfaction scores for support interactions
- **Management**: Get comprehensive satisfaction reporting for strategic decisions
Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)
#n8n #automation #customersatisfaction #satisfactionscores #brightdata #webscraping #customerexperience #n8nworkflow #workflow #nocode #satisfactiontracking #csat #nps #customeranalytics #feedbackanalysis #customerinsights #satisfactionmonitoring #experiencemanagement #customermetrics #satisfactionsurveys #feedbackautomation #customerfeedback #satisfactiondata #customerjourney #experienceanalytics #satisfactionreporting #customersentiment #experienceoptimization #satisfactiontrends #customervoice
by n8n Team
Who this template is for
This template is for developers or teams who need to convert CSV data into JSON format through an API endpoint, with support for both file uploads and raw CSV text input.
Use case
Converting CSV files or raw CSV text data into JSON format via a webhook endpoint, with error handling and notifications. This is particularly useful when you need to transform CSV data into JSON as part of a larger automation or integration process.
How this workflow works
1. Receives POST requests through a webhook endpoint at /tool/csv-to-json
2. Uses a Switch node to handle different input types: file uploads (binary data), plain text CSV data, and JSON format data
3. Processes the CSV data: for files, uses the Extract From File node; for raw text, converts the text to CSV using a custom Code node that handles both comma and semicolon delimiters (see the sketch after this section)
4. Aggregates the processed data and returns either a success response (200) with the converted JSON data or an error response (500) with the error message and details
5. In case of errors, sends notifications to a Slack error channel with execution details and a link to debug
Set up steps
1. Configure the webhook endpoint by deploying the workflow
2. Set up Slack integration for error notifications: update the Slack channel ID (currently set to "C0832GBAEN4") and configure OAuth2 authentication for Slack
3. Test the endpoint using CURL for file uploads:

```bash
curl -X POST "https://yoururl.com/webhook-test/tool/csv-to-json" \
  -H "Content-Type: text/csv" \
  --data-binary @path/to/your/file.csv
```

Or send raw CSV data with the text/plain content type.
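The delimiter-handling Code node could look something like the sketch below. This is a simplified illustration, not the template's actual code: it picks whichever delimiter is more frequent in the header row and does not handle quoted fields that contain delimiters. Where the raw text lands on the item (body here) is an assumption.

```javascript
// Parse raw CSV text into JSON items, auto-detecting comma vs. semicolon.
const text = $input.first().json.body; // field name is an assumption

const lines = text.trim().split(/\r?\n/);
const header = lines[0];
// Pick whichever delimiter appears more often in the header row.
const delimiter = header.split(';').length > header.split(',').length ? ';' : ',';

const keys = header.split(delimiter).map(k => k.trim());

return lines.slice(1).map(line => {
  const values = line.split(delimiter);
  return {
    json: Object.fromEntries(keys.map((k, i) => [k, (values[i] ?? '').trim()])),
  };
});
```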
by Ludwig
Using PostBin to Test Webhooks Without Changing WEBHOOK_URL
How it Works
Many new n8n users struggle with testing webhooks when running n8n on localhost, as external services cannot reach localhost. This workflow introduces a technique using PostBin, which provides a temporary, publicly accessible URL to receive webhook requests. The workflow:
- Generates a temporary webhook endpoint via PostBin.
- Uses this endpoint in place of localhost to test webhooks.
- Captures and displays the incoming webhook request data.
- Enables debugging and iterating without modifying the WEBHOOK_URL environment variable.
Set Up Steps
**Estimated time:** ~5–10 minutes
1. Create a PostBin instance to generate a publicly accessible webhook URL.
2. Copy the PostBin URL and use it as the webhook destination in n8n.
3. Trigger the webhook from an external service or manually.
4. Inspect the request payload in PostBin to verify data reception.
(EXAMPLE) Using PostBin for Webhook Testing in a BambooHR Integration
How it Works
In this example, we apply the PostBin technique to a BambooHR integration. Instead of manually configuring a webhook in BambooHR, this workflow automates webhook registration using the BambooHR API (see the sketch after this section). The workflow:
- Uses the BambooHR API to programmatically register the PostBin URL as a webhook.
- Retrieves the most recent webhook calls made by BambooHR to the PostBin URL.
- (Optional) Sends a personalized Slack message for new employees using OpenAI.
Set Up Steps
**Estimated time:** ~15–20 minutes
1. Set up PostBin using the steps from the first section.
2. Log into BambooHR to generate an API key for authentication.
3. Run the workflow to register the PostBin URL as a webhook in BambooHR via the API.
4. Retrieve recent webhook calls from PostBin to validate data reception.
5. (Optional) Send a Slack notification using the processed data.
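The registration call the workflow makes could look roughly like this standalone Node.js sketch (Node 18+ for global fetch, run as an ES module outside n8n). It is an approximation based on BambooHR's v1 webhooks API; the company subdomain, API key, bin ID, and field lists are all placeholders, and the exact request schema should be checked against BambooHR's documentation.

```javascript
// Register a PostBin URL as a BambooHR webhook.
const company = 'yourcompany';                        // hypothetical subdomain
const apiKey = process.env.BAMBOOHR_API_KEY;          // generated in BambooHR
const binUrl = 'https://www.postb.in/b/your-bin-id';  // placeholder PostBin URL

const response = await fetch(
  `https://api.bamboohr.com/api/gateway.php/${company}/v1/webhooks/`,
  {
    method: 'POST',
    headers: {
      // BambooHR uses HTTP Basic auth with the API key as the username.
      Authorization: 'Basic ' + Buffer.from(`${apiKey}:x`).toString('base64'),
      'Content-Type': 'application/json',
      Accept: 'application/json',
    },
    body: JSON.stringify({
      name: 'PostBin test hook',
      monitorFields: ['firstName', 'lastName', 'hireDate'], // example fields
      postFields: { firstName: 'firstName', lastName: 'lastName' },
      url: binUrl,
      format: 'json',
    }),
  },
);

console.log(await response.json());
```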
by Udit Rawat
Workflow based on the following article: https://www.anthropic.com/news/contextual-retrieval
This n8n automation is designed to extract, process, and store content from documents into a Pinecone vector store using context-based chunking. The workflow enhances retrieval accuracy in RAG (Retrieval-Augmented Generation) setups by ensuring each chunk retains meaningful context.
Workflow Breakdown:
🔹 Google Drive - Retrieve Document: The automation starts by fetching a source document from Google Drive. This document contains structured content, with predefined boundary markers for easy segmentation.
🔹 Extract Text Content: Once retrieved, the document's text is extracted for processing. Special section boundary markers are used to divide the text into logical sections.
🔹 Code Node - Create Context-Based Chunks: A custom code node processes the extracted text, identifying section boundaries and splitting the document into meaningful chunks (see the sketch after this section). Each chunk is structured to retain its context within the entire document.
🔹 Loop Node - Process Each Chunk: The workflow loops through each chunk, ensuring they are processed individually while maintaining a connection to the overall document context.
🔹 Agent Node - Generate Context for Each Chunk: We use an Agent node powered by OpenAI's GPT-4o-mini via OpenRouter to generate contextual metadata for each chunk, ensuring better retrieval accuracy.
🔹 Prepend Context to Chunks & Create Embeddings: The generated context is prepended to the original chunk, creating context-rich embeddings that improve searchability.
🔹 Google Gemini - Text Embeddings: The processed text is passed through Google Gemini text-embedding-004, which converts the text into semantic vector representations.
🔹 Pinecone Vector Store - Store Embeddings: The final embeddings, along with the enriched chunk content and metadata, are stored in Pinecone, making them easily retrievable for RAG-based AI applications.
Use Case:
This automation enhances RAG retrieval by ensuring each chunk is contextually aware of the entire document, leading to more accurate AI responses. It's perfect for applications that require semantic search, AI-powered knowledge management, or intelligent document retrieval. By implementing context-based chunking, this workflow ensures that LLMs retrieve the most relevant data, improving response quality and accuracy in AI-driven applications.
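The boundary-based chunking step could look roughly like this in an n8n Code node. A minimal sketch, not the template's actual code: the marker string and the input field name are assumptions, since the source document defines its own boundary format.

```javascript
// Split the extracted text into chunks at predefined section boundary markers.
const text = $input.first().json.text;  // field name depends on the extraction node
const BOUNDARY = '<<<SECTION>>>';       // hypothetical marker

const sections = text
  .split(BOUNDARY)
  .map(s => s.trim())
  .filter(s => s.length > 0);

return sections.map((content, index) => ({
  json: {
    chunkIndex: index,
    totalChunks: sections.length,
    content,
  },
}));
```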
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.
This workflow automatically gathers and analyzes feature requests from multiple sources including support tickets, user forums, and feedback platforms to help prioritize product development. It saves you time by eliminating the need to manually monitor various channels and provides intelligent feature request analysis.
Overview
This workflow automatically scrapes support systems, user forums, social media, and feedback platforms to collect feature requests from customers. It uses Bright Data to access various platforms without being blocked and AI to intelligently categorize, prioritize, and analyze feature requests based on frequency and user impact.
Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping support platforms and user forums without being blocked
- **OpenAI**: AI agent for intelligent feature request categorization and analysis
- **Google Sheets**: For storing feature requests and generating prioritization reports
How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your feature request tracking spreadsheet
5. Customize: Define feedback sources and feature request identification parameters
Use Cases
- **Product Management**: Prioritize roadmap items based on customer demand
- **Development Teams**: Understand which features users need most
- **Customer Success**: Track and respond to feature requests proactively
- **Strategy Teams**: Make data-driven decisions about product direction
Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)
#n8n #automation #featurerequests #productmanagement #brightdata #webscraping #productdevelopment #n8nworkflow #workflow #nocode #roadmapping #customervoice #productinsights #featureanalysis #productfeedback #userresearch #productdata #featuretracking #productplanning #customerneeds #featurediscovery #productprioritization #featurebacklog #uservoice #productintelligence #developmentplanning #featuremonitoring #productdecisions #feedbackgathering #productautomation
by Leonardo Grigorio
Video explanation
This n8n workflow helps you identify trending videos within your niche by detecting outlier videos that significantly outperform a channel's average views. It automates the process of monitoring competitor channels, saving time and streamlining content research.
Included in the Workflow
- Automated Competitor Video Tracking: Monitors videos from specified competitor channels, fetching data directly from the YouTube API.
- Outlier Detection Based on Channel Averages: Compares each video's performance against the channel's historical average to identify significant spikes in viewership.
- Historical Video Data Management: Stores video statistics in a PostgreSQL database, allowing the workflow to only fetch new videos and optimize API usage.
- Short Video Filtering: Automatically removes short videos based on duration thresholds.
- Flexible Video Retrieval: Fetches up to 3 months of historical data on the first run and only new videos on subsequent runs.
- PostgreSQL Database Integration: Includes SQL queries for database setup, video insertion, and performance analysis.
- Configurable Outlier Threshold: Focuses on videos published within the last two weeks with view counts at least twice the channel's average (see the sketch at the end of this section).
- Data Output for Analysis: Outputs best-performing videos along with their engagement metrics, making it easier to identify trending topics.
Requirements
- n8n installed on your machine or server
- A valid YouTube Data API key
- Access to a PostgreSQL database
This workflow is intended for educational and research purposes, helping content creators gain insights into what topics resonate with audiences without manual daily monitoring.
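The outlier rule described above could be expressed like this in an n8n Code node. A minimal sketch under assumed field names (viewCount, publishedAt); the template itself does this work in SQL against the PostgreSQL history.

```javascript
// Flag outliers: videos published within the last two weeks whose view count
// is at least twice the channel's historical average.
const OUTLIER_MULTIPLIER = 2;
const WINDOW_MS = 14 * 24 * 60 * 60 * 1000;

const videos = $input.all().map(item => item.json);

// Average views across the channel's stored history.
const channelAvg =
  videos.reduce((sum, v) => sum + Number(v.viewCount), 0) / videos.length;

const cutoff = Date.now() - WINDOW_MS;

const outliers = videos.filter(
  v =>
    new Date(v.publishedAt).getTime() >= cutoff &&
    Number(v.viewCount) >= OUTLIER_MULTIPLIER * channelAvg,
);

return outliers.map(v => ({ json: v }));
```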