by Recrutei Automações
What This Workflow Does
This workflow automates the candidate nurturing process, solving the common problem of candidates losing interest or "ghosting" after an application. It keeps them engaged and informed by sending a personalized, multi-channel (WhatsApp & Gmail) sequence of follow-up messages over their first week. The automation triggers when a new candidate is added to your ATS (e.g., via a Recrutei webhook). It then uses AI to generate a custom 3-part message (for Day 1, Day 3, and Day 7) tailored to the candidate's age and the specific job they applied for, ensuring a professional and empathetic experience that strengthens your employer brand.

How it Works
1. Trigger: A Webhook node captures the new candidate data from your Applicant Tracking System (ATS) or form.
2. Data Preparation: Two Code nodes clean the incoming data. The first (Separating information) extracts key fields and formats the phone number. The second (Extract age) calculates the candidate's age from their birthday to be used by the AI.
3. AI Content Generation: The workflow sends the candidate's details (name, age, job title) to an AI model (AI Recruitment Assistant). The AI has a detailed system prompt to generate three distinct messages for Day 1 (Thank You), Day 3 (Friendly Reminder), and Day 7 (Final Reinforcement), adapting its tone based on the candidate's age.
4. Split Messages: A Code node (Separating messages per days) receives the single text block from the AI and splits it into three separate variables (day1, day3, day7); see the sketch after this description.
5. Day 1 Send: The workflow immediately sends the day1 message via both Gmail and WhatsApp (configured for Evolution API).
6. Day 3 Send: A Wait node pauses the workflow for 2 days, after which it sends the day3 message.
7. Day 7 Send: Another Wait node pauses for 4 more days, then sends the final day7 message, completing the 7-day nurturing sequence.

Setup Instructions
This workflow is plug-and-play once you configure the following 5 steps:
1. Webhook Node: Copy the Test URL from the Webhook node and configure it in your ATS (e.g., Recrutei) or form builder to trigger whenever a new candidate is added. Run one test submission to make the data structure visible to n8n.
2. AI Credentials: In the AI Recruitment Assistant node, select or create your OpenAI API credential.
3. MCP Credential (Optional): If you use a Recrutei MCP, paste your endpoint URL into the MCP Recrutei node.
4. Gmail Credentials: In all three Message Gmail nodes (Day 1, 3, 7), select or create your Gmail (OAuth2) credential. Optional: in the same nodes, go to Options and change the Sender Name from your_company to your actual company name.
5. WhatsApp (Evolution API): This template is pre-configured for the Evolution API. In all three Message WhatsApp nodes (Day 1, 3, 7), you must:
   - URL: Replace {server-url} and {instance} with your Evolution API details.
   - Headers: In the "Header Parameters" section, replace your_api_key with your actual Evolution API key.
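For illustration, here is a minimal sketch of what the Separating messages per days Code node can look like, assuming the AI is prompted to label its output with "Day 1:", "Day 3:", and "Day 7:" headers. The labels and the input field (`$json.output`) are assumptions that depend on your system prompt and model node, not the exact code shipped in the template.

```javascript
// Minimal sketch of the "Separating messages per days" Code node.
// Assumes the AI returns one text block with sections labeled
// "Day 1:", "Day 3:" and "Day 7:" (the exact labels depend on your system prompt).
const text = $json.output ?? $json.text ?? '';

const extract = (label, nextLabel) => {
  const start = text.indexOf(label);
  if (start === -1) return '';
  const end = nextLabel ? text.indexOf(nextLabel, start) : text.length;
  return text.slice(start + label.length, end === -1 ? text.length : end).trim();
};

return [{
  json: {
    day1: extract('Day 1:', 'Day 3:'),
    day3: extract('Day 3:', 'Day 7:'),
    day7: extract('Day 7:', null),
  },
}];
```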
by Raquel Giugliano
This workflow automates currency rate uploads into SAP Business One via the Service Layer, using flexible input sources such as JSON (API), SQL Server, Google Sheets, or manual values. It leverages logic branching, AI validation, and logging for complete control and traceability.

++⚙️ HOW IT WORKS:++

🔹 1. Receive Data via Webhook
The workflow listens on the endpoint /formulario-datos via HTTP POST. The request body should include:
- origen: one of JSON, SQL, GoogleSheets, or Manual
Depending on the value, the flow branches accordingly.

🔹 2. Authenticate with SAP Business One
A POST request is sent to SAP B1’s Login endpoint. A session cookie (B1SESSION) is retrieved and used in all subsequent API calls.

🔹 3. Switch by Origin
The flow branches into four processing paths based on origen:
- JSON: The payload is normalized using OpenAI to extract an array of rates. Each rate is sent to SAP individually after parsing.
- SQL: The SQL query provided in the payload is executed on a connected Microsoft SQL Server. The results are checked by AI to validate the date format. If valid, rates are sent to SAP.
- GoogleSheets: Rates are pulled from a connected spreadsheet. Each entry is sent to SAP in sequence.
- Manual: Uses currency, rate, and rateDate directly from the webhook payload and sends the result directly to SAP.

🔹 4. AI-Powered Enhancements (Optional but enabled)
- Normalize JSON: Uses OpenAI (LangChain node) to convert any messy structure into a uniform array under the key rate.
- Date Formatting: Another OpenAI call ensures RateDate is in yyyyMMdd format (required by SAP), converting from ISO, timestamp, or other formats.

🔹 5. Send to SAP Business One (Service Layer)
All paths send a POST request to /SBOBobService_SetCurrencyRate with a payload such as:

```json
{
  "Currency": "USD",
  "Rate": "0.92",
  "RateDate": "20250612"
}
```

🔹 6. Log Results
All success/failure results are appended to a Google Sheets log (LOGS_N8N). The log includes method, URL, sent payload, status code, and message.

++🛠 SETUP STEPS:++

1️⃣ Create Required Credentials
Go to Credentials > + New Credential and configure:
- SAP Business One (Service Layer): Type: HTTP Request Auth or Token; Base URL: https://<your-host>:50000/b1s/v1/; provide Username, Password, and CompanyDB via variables or fields
- Google Sheets: OAuth2 connection to a Google account with access
- Microsoft SQL Server: SQL login credentials and host
- OpenAI: API key with access to models like GPT-4o

2️⃣ Environment Variables (Recommended)
Set these variables in n8n → Settings → Variables:
SAP_URL=https://<host>:50000/b1s/v1/
SAP_USER=your_username
SAP_PASSWORD=your_password
SAP_COMPANY_DB=your_companyDB

3️⃣ Prepare Google Sheets
- Sheet 1: RATE (for loading the data). Columns: Currency, Rate, RateDate
- Sheet 2: LOGS_N8N (to save the logs, success or failure). Columns: workflow, method, url, json, status_code, message

4️⃣ Activate and Test
Deploy the webhook and grab the URL.

++✅ BONUS++
- Built-in AI assistance for input validation and structure
- Logs all results for compliance and audit
- Flexible integration paths: perfect for hybrid or transitional systems
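As a reference for the date-formatting requirement, here is a hedged sketch of an n8n Code node that normalizes RateDate to yyyyMMdd deterministically before posting to the Service Layer. The field names follow the example payload above and may differ from your actual webhook body.

```javascript
// Minimal sketch of a Code node that normalizes RateDate to SAP's yyyyMMdd
// format and builds the SBOBobService_SetCurrencyRate payload.
// Field names (Currency, Rate, RateDate) follow the example payload above;
// adjust them to match your actual webhook body.
return $input.all().map((item) => {
  const { Currency, Rate, RateDate } = item.json;

  // Accepts ISO strings or other Date-parsable values.
  const d = new Date(RateDate);
  if (Number.isNaN(d.getTime())) {
    throw new Error(`Unrecognized RateDate: ${RateDate}`);
  }
  const pad = (n) => String(n).padStart(2, '0');
  const yyyyMMdd = `${d.getFullYear()}${pad(d.getMonth() + 1)}${pad(d.getDate())}`;

  return {
    json: {
      Currency,
      Rate: String(Rate),
      RateDate: yyyyMMdd,
    },
  };
});
```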
by Marcel Claus-Ahrens
Instructions
This automation overlays a background image with another image, making it easy to add watermarks or logos. You can use this automation to watermark your images by overlaying them with a transparent version of your logo. If you'd like to place your logo in a specific corner, feel free to adjust the position of the overlay image in the code node.

How it Works
1. Both images are downloaded, so we can process binary files (you can modify the source, though).
2. We extract metadata, focusing on the dimensions of each image.
3. The position of the overlay image is calculated (default: dead center of the background image); see the sketch below.
4. The two images are composited together.

Limitations and Optimisation Opportunities
- The overlay image must be the same size or smaller than the background image for proper alignment.
- The overlay image does not automatically scale to match the proportions of the background image.

Enjoy the workflow! ❤️
let the work flow — Workflow Automation & Development
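As a starting point for adjusting the placement, here is a minimal sketch of the centering calculation. The property names for the image dimensions are illustrative and should be matched to whatever your metadata nodes actually output.

```javascript
// Minimal sketch of the position calculation for the overlay image.
// Assumes previous nodes provide the dimensions of both images
// (property names here are illustrative; match them to your metadata nodes).
const bg = $json.background;   // e.g. { width: 1920, height: 1080 }
const overlay = $json.overlay; // e.g. { width: 400, height: 200 }

// Dead center of the background image (top-left corner of the overlay).
const positionX = Math.round((bg.width - overlay.width) / 2);
const positionY = Math.round((bg.height - overlay.height) / 2);

// To pin the logo to the bottom-right corner instead, use for example:
// const positionX = bg.width - overlay.width - 20;
// const positionY = bg.height - overlay.height - 20;

return [{ json: { positionX, positionY } }];
```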
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically tracks customer satisfaction scores across multiple platforms and surveys to help improve customer experience and identify areas for enhancement. It saves you time by eliminating the need to manually check different feedback sources and provides comprehensive satisfaction analytics.

Overview
This workflow automatically scrapes customer satisfaction surveys, review platforms, and feedback forms to extract satisfaction scores and sentiment data. It uses Bright Data to access various feedback platforms without being blocked and AI to intelligently analyze satisfaction trends and identify improvement opportunities.

Tools Used
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping satisfaction surveys and review platforms without being blocked
- **OpenAI**: AI agent for intelligent satisfaction analysis and trend identification
- **Google Sheets**: For storing satisfaction scores and generating analytics reports

How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your satisfaction tracking spreadsheet
5. Customize: Define the feedback sources and satisfaction metrics you want to monitor

Use Cases
- **Customer Experience**: Monitor satisfaction trends across all customer touchpoints
- **Product Teams**: Identify product features that impact customer satisfaction
- **Support Teams**: Track satisfaction scores for support interactions
- **Management**: Get comprehensive satisfaction reporting for strategic decisions

Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #customersatisfaction #satisfactionscores #brightdata #webscraping #customerexperience #n8nworkflow #workflow #nocode #satisfactiontracking #csat #nps #customeranalytics #feedbackanalysis #customerinsights #satisfactionmonitoring #experiencemanagement #customermetrics #satisfactionsurveys #feedbackautomation #customerfeedback #satisfactiondata #customerjourney #experienceanalytics #satisfactionreporting #customersentiment #experienceoptimization #satisfactiontrends #customervoice
by n8n Team
Who this template is for
This template is for developers or teams who need to convert CSV data into JSON format through an API endpoint, with support for both file uploads and raw CSV text input.

Use case
Converting CSV files or raw CSV text data into JSON format via a webhook endpoint, with error handling and notifications. This is particularly useful when you need to transform CSV data into JSON as part of a larger automation or integration process.

How this workflow works
1. Receives POST requests through a webhook endpoint at /tool/csv-to-json
2. Uses a Switch node to handle different input types:
   - File uploads (binary data)
   - Plain text CSV data
   - JSON format data
3. Processes the CSV data:
   - For files: Uses the Extract From File node
   - For raw text: Converts the text to CSV using a custom Code node that handles both comma and semicolon delimiters (a sketch of this parsing logic appears after the setup steps)
4. Aggregates the processed data and returns:
   - Success response (200): Converted JSON data
   - Error response (500): Error message with details
5. In case of errors, sends notifications to a Slack error channel with execution details and a link to debug

Set up steps
1. Configure the webhook endpoint by deploying the workflow
2. Set up Slack integration for error notifications:
   - Update the Slack channel ID (currently set to "C0832GBAEN4")
   - Configure OAuth2 authentication for Slack
3. Test the endpoint using either CURL for file uploads:

```bash
curl -X POST "https://yoururl.com/webhook-test/tool/csv-to-json" \
  -H "Content-Type: text/csv" \
  --data-binary @path/to/your/file.csv
```

   or send raw CSV data as text/plain content type.
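For reference, the raw-text branch can be handled with a small Code node along these lines. This is an illustrative sketch of the delimiter-detection approach, not the exact code shipped in the template, and it does not handle quoted fields containing delimiters.

```javascript
// Minimal sketch of a Code node that parses raw CSV text into JSON items,
// auto-detecting comma vs. semicolon delimiters.
const text = $json.data ?? '';
const lines = text.split(/\r?\n/).filter((l) => l.trim() !== '');

// Pick whichever delimiter appears more often in the header row.
const header = lines[0] ?? '';
const delimiter =
  (header.match(/;/g) || []).length > (header.match(/,/g) || []).length ? ';' : ',';

const columns = header.split(delimiter).map((c) => c.trim());

return lines.slice(1).map((line) => {
  const values = line.split(delimiter);
  const row = {};
  columns.forEach((col, i) => {
    row[col] = (values[i] ?? '').trim();
  });
  return { json: row };
});
```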
by Ludwig
Using PostBin to Test Webhooks Without Changing WEBHOOK_URL

How it Works
Many new n8n users struggle with testing webhooks when running n8n on localhost, as external services cannot reach localhost. This workflow introduces a technique using PostBin, which provides a temporary, publicly accessible URL to receive webhook requests. It:
- Generates a temporary webhook endpoint via PostBin.
- Uses this endpoint in place of localhost to test webhooks.
- Captures and displays the incoming webhook request data.
- Enables debugging and iterating without modifying the WEBHOOK_URL environment variable.

Set Up Steps
**Estimated time:** ~5–10 minutes
1. Create a PostBin instance to generate a publicly accessible webhook URL.
2. Copy the PostBin URL and use it as the webhook destination in n8n.
3. Trigger the webhook from an external service or manually.
4. Inspect the request payload in PostBin to verify data reception.

(EXAMPLE) Using PostBin for Webhook Testing in a BambooHR Integration

How it Works
In this example, we apply the PostBin technique to a BambooHR integration. Instead of manually configuring a webhook in BambooHR, this workflow automates webhook registration using the BambooHR API. The workflow:
- Uses the BambooHR API to programmatically register the PostBin URL as a webhook.
- Retrieves the most recent webhook calls made by BambooHR to the PostBin URL.
- (Optional) Sends a personalized Slack message for new employees using OpenAI.

Set Up Steps
**Estimated time:** ~15–20 minutes
1. Set up PostBin using the steps from the first section.
2. Log into BambooHR to generate an API key for authentication.
3. Run the workflow to register the PostBin URL as a webhook in BambooHR via the API.
4. Retrieve recent webhook calls from PostBin to validate data reception.
5. (Optional) Send a Slack notification using the processed data.
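For context, creating a bin outside of n8n is a single HTTP call. The sketch below is a hedged example against PostBin's public API; the endpoint paths and the binId response field are assumptions that should be verified against the current PostBin documentation.

```javascript
// Hedged sketch: create a PostBin bin and derive the public URL to use as a
// webhook destination. Endpoint paths and the binId response field are
// assumptions based on PostBin's public API; verify them against the docs.
const BASE = 'https://www.toptal.com/developers/postbin';

async function createBinUrl() {
  const res = await fetch(`${BASE}/api/bin`, { method: 'POST' });
  if (!res.ok) throw new Error(`PostBin returned ${res.status}`);
  const { binId } = await res.json();

  // Point your external service (e.g. BambooHR) at this URL instead of localhost.
  return `${BASE}/${binId}`;
}

createBinUrl().then((url) => console.log('Send test webhooks to:', url));
```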
by Leonardo Grigorio
Video explanation

This n8n workflow helps you identify trending videos within your niche by detecting outlier videos that significantly outperform a channel's average views. It automates the process of monitoring competitor channels, saving time and streamlining content research.

Included in the Workflow
- Automated Competitor Video Tracking: Monitors videos from specified competitor channels, fetching data directly from the YouTube API.
- Outlier Detection Based on Channel Averages: Compares each video’s performance against the channel’s historical average to identify significant spikes in viewership.
- Historical Video Data Management: Stores video statistics in a PostgreSQL database, allowing the workflow to only fetch new videos and optimize API usage.
- Short Video Filtering: Automatically removes short videos based on duration thresholds.
- Flexible Video Retrieval: Fetches up to 3 months of historical data on the first run and only new videos on subsequent runs.
- PostgreSQL Database Integration: Includes SQL queries for database setup, video insertion, and performance analysis.
- Configurable Outlier Threshold: Focuses on videos published within the last two weeks with view counts at least twice the channel's average (see the sketch below).
- Data Output for Analysis: Outputs best-performing videos along with their engagement metrics, making it easier to identify trending topics.

Requirements
- n8n installed on your machine or server
- A valid YouTube Data API key
- Access to a PostgreSQL database

This workflow is intended for educational and research purposes, helping content creators gain insights into what topics resonate with audiences without manual daily monitoring.
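To make the threshold concrete, here is a minimal sketch of the outlier rule expressed as an n8n Code node. In the template itself this comparison is done with SQL against PostgreSQL, and the input field names below are illustrative.

```javascript
// Minimal sketch of the outlier rule described above: keep videos published in
// the last 14 days whose view count is at least twice the channel's average.
// Input item shape ({ videoId, channelAvgViews, viewCount, publishedAt }) is
// illustrative; in the template this comparison lives in the SQL queries.
const TWO_WEEKS_MS = 14 * 24 * 60 * 60 * 1000;
const now = Date.now();

const outliers = $input.all().filter(({ json: video }) => {
  const isRecent = now - new Date(video.publishedAt).getTime() <= TWO_WEEKS_MS;
  const isOutlier = video.viewCount >= 2 * video.channelAvgViews;
  return isRecent && isOutlier;
});

return outliers;
```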
by Joseph LePage
💡🌐 Essential Multipage Website Scraper with Jina.ai

Use responsibly and follow local rules and regulations.

This n8n workflow enables automated multi-page website scraping using Jina.ai's powerful web scraping capabilities, with seamless integration to Google Drive for content storage. Here's how it works:

Main Features
The workflow automatically scrapes multiple pages from a website's sitemap and saves each page's content as a separate Google Drive document.

Key Components
- Input Configuration
  - Starts with a sitemap URL (default: https://ai.pydantic.dev/sitemap.xml)
  - Processes the sitemap to extract individual page URLs
  - Includes filtering options to target specific topics or pages
- Scraping Process
  - Uses Jina.ai's web scraper to extract content from each URL (see the sketch below)
  - Converts webpage content into clean markdown format
  - Extracts page titles automatically for document naming
- Storage Integration
  - Creates individual Google Drive documents for each scraped page
  - Names documents using the format "URL - Page Title"
  - Saves content in markdown format for better readability

Usage Instructions
1. Set your target website's sitemap URL in the "Set Website URL" node
2. Configure the "Filter By Topics or Pages" node to select specific content
3. Adjust the "Limit" node (default: 20 pages) to control batch size
4. Connect your Google Drive account
5. Run the workflow to begin automated scraping

Additional Features
- Built-in rate limiting through the Wait node to prevent overloading servers
- Batch processing capability for handling large sitemaps

The workflow requires no API key for Jina.ai, making it accessible for immediate use while maintaining responsible scraping practices.
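For context, the scraping call itself is a simple HTTP request to Jina.ai's Reader endpoint, which returns the page as markdown. This is a minimal standalone sketch; rate limits and endpoint behavior should be confirmed against Jina's documentation.

```javascript
// Minimal sketch of the scraping call: Jina.ai's Reader endpoint returns a
// clean markdown version of a page when its URL is prefixed with r.jina.ai.
// No API key is required for basic use, though rate limits apply.
async function scrapeAsMarkdown(pageUrl) {
  const res = await fetch(`https://r.jina.ai/${pageUrl}`);
  if (!res.ok) throw new Error(`Jina Reader returned ${res.status} for ${pageUrl}`);
  return res.text(); // markdown content of the page
}

scrapeAsMarkdown('https://ai.pydantic.dev/')
  .then((md) => console.log(md.slice(0, 500)));
```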
by johappel
Main Workflow “AI Nextcloud”
- **Entry point**: A public chat trigger greets the user; every incoming chat message starts the flow.
- **AI agent**: A LangChain agent (“AI Nextcloud”) uses the configured OpenAI model plus short-term memory to continue the dialogue in context.
- **Purpose**: Answers questions about files stored in a Nextcloud folder. The user simply includes the folder path in their question.
- **Tool integration**: Calls the sub-workflow “Nextcloud Tool” whenever it needs to read files and pass their text back to the AI.

Sub-Workflow “Nextcloud Tool”
1. Invocation: Triggered by other workflows with the input parameter path (folder path).
2. File listing: Retrieves every file in the specified folder via the Nextcloud API.
3. Filter: Allows only readable formats (PDF, Markdown, DOCX); see the sketch below.
4. Download & text extraction:
   - PDF → text via Extract From File
   - Markdown → raw text
   - DOCX → text via the community node word2text
5. Aggregation: Combines all extracted text into a single output field and returns it.

> Outcome: Each call yields the plain content of every supported file in a Nextcloud folder—providing rich context for the AI agent to answer user questions accurately.
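As an illustration of the filter step, a minimal extension-based filtering sketch is shown below. The `path` property is an assumption about how the Nextcloud listing names the file path field; adjust it to match the actual node output.

```javascript
// Minimal sketch of the filter step: keep only files the sub-workflow can
// extract text from (PDF, Markdown, DOCX).
const allowed = ['.pdf', '.md', '.markdown', '.docx'];

return $input.all().filter(({ json: file }) => {
  const name = (file.path ?? '').toLowerCase();
  return allowed.some((ext) => name.endsWith(ext));
});
```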
by Mutasem
Use case
Slackbots are super powerful. At n8n, we have been using them to get a lot done. But it can become hard to manage and maintain the many different operations that a workflow can do.

This is the base workflow we use for our most powerful internal Slackbots. They handle a lot, from running e2e tests for a Github branch to deleting a user. By splitting the workflow into many subworkflows, we are able to handle each command separately, making it easier to debug as well as to support new use cases.

In this template, you can find everything to set up your own Slackbot (and I made it simple, there's only one node to configure 😉). After that, you need to build your commands directly.

This bot can create a new thread on an alerts channel and respond there, or reply directly to the user. It responds to help requests by returning a help page. It automatically handles unknown commands. It also supports flags and environment variables. For example, /cloudbot-test info mutasem --full-info -e env=prod would give you the following info when calling the subworkflow (see the parsing sketch below).

How to setup
1. Add the Slack command and point it to the webhook URL.
2. Add the following to the Set config node:
   - alerts_channel with the alerts channel to start threads on
   - instance_url with this instance's URL to make it easy to debug
   - slack_token with the Slack bot token to validate requests
   - slack_secret_signature with the Slack secret signature to validate requests
   - help_docs_url with the help URL to help users understand the commands
3. Build other workflows to call and add them to commands in Set Config. Each command must be mapped to a workflow id with an Execute Workflow Trigger node.
4. Activate the workflow 🚀

How to adjust
Add your own commands. Depending on your need, you might need to lock down who can call this.
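To show how such a command line can be broken down, here is a minimal parsing sketch. The output shape (command, args, flags, env) is illustrative and not necessarily how the internal workflow structures it.

```javascript
// Minimal sketch of parsing a Slack slash-command text such as
// "info mutasem --full-info -e env=prod" into a command, positional arguments,
// boolean flags, and environment variables.
function parseCommand(text) {
  const tokens = text.trim().split(/\s+/);
  const result = { command: tokens.shift() ?? '', args: [], flags: {}, env: {} };

  while (tokens.length) {
    const token = tokens.shift();
    if (token === '-e') {
      // Environment variable in key=value form, e.g. "-e env=prod".
      const [key, value] = (tokens.shift() ?? '').split('=');
      if (key) result.env[key] = value ?? '';
    } else if (token.startsWith('--')) {
      result.flags[token.slice(2)] = true; // boolean flag, e.g. "--full-info"
    } else {
      result.args.push(token); // positional argument, e.g. "mutasem"
    }
  }
  return result;
}

console.log(parseCommand('info mutasem --full-info -e env=prod'));
// { command: 'info', args: ['mutasem'], flags: { 'full-info': true }, env: { env: 'prod' } }
```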
by Harshil Agrawal
This workflow allows you to insert and retrieve data from a table in Stackby.

Set node: The Set node is used to set the values for the name and id fields for a new record. You might want to add data from an external source, for example an API or a CRM. Based on your use case, add the respective node before the Set node and configure your Set node accordingly.

Stackby node: This node appends data from the previous node to a table in Stackby. Based on the values you want to add to your table, enter the column names in the Column field.

Stackby1 node: This node fetches all the data that is stored in the table in Stackby.
by Cheney Zhang
Create a RAG System with Paul Graham Essays, Milvus, and OpenAI for Cited Answers

This workflow automates the process of creating a document-based AI retrieval system using Milvus, an open-source vector database. It consists of two main steps:
1. Data collection/processing
2. Retrieval/response generation

The system scrapes Paul Graham essays, processes them, and loads them into a Milvus vector store. When users ask questions, it retrieves relevant information and generates responses with citations.

Step 1: Data Collection and Processing
1. Set up a Milvus server using the official guide
2. Create a collection named "my_collection"
3. Execute the workflow to scrape Paul Graham essays:
   - Fetch essay lists
   - Extract names
   - Split content into manageable items
   - Limit results (if needed)
   - Fetch texts
   - Extract content
   - Load everything into the Milvus Vector Store
This step uses OpenAI embeddings for vectorization.

Step 2: Retrieval and Response Generation
When a chat message is received, the system:
1. Sets the number of chunks to send to the model
2. Retrieves relevant information from the Milvus Vector Store
3. Prepares chunks
4. Answers the query based on those chunks
5. Composes citations (see the sketch below)
6. Generates a comprehensive response

This process uses OpenAI embeddings and models to ensure accurate and relevant answers with proper citations. For more information on vector databases and similarity search, visit the Milvus documentation.
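As an illustration of the citation step, here is a minimal sketch that numbers retrieved chunks and builds a prompt plus a citation lookup. The field names (pageContent, metadata.source) follow common vector-store output shapes and may differ from what this workflow actually produces.

```javascript
// Minimal sketch of the "prepare chunks / compose citations" idea: number the
// retrieved chunks, pass them to the model as context, and keep a lookup table
// so bracketed references like [1] can be expanded into source citations.
const chunks = $input.all().map(({ json }, i) => ({
  id: i + 1,
  text: json.pageContent,
  source: json.metadata?.source ?? 'unknown',
}));

const context = chunks
  .map((c) => `[${c.id}] ${c.text}`)
  .join('\n\n');

const prompt =
  `Answer the question using only the numbered context below. ` +
  `Cite the chunks you used as [n].\n\n${context}`;

return [{ json: { prompt, citations: chunks.map(({ id, source }) => ({ id, source })) } }];
```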