by Miquel Colomer
This n8n workflow template uses uProc's "Get Email by Domain, Firstname and Lastname" tool to discover a professional email address, and then sends that email to a Telegram channel.

> ⚠️ Note: You must set up your *uProc credentials (Email + API Key)* from the *Integration settings* before running this workflow.

🚀 What It Does

- Uses user-provided data: first name, last name, and company domain
- Calls uProc to discover the most likely email address for that person
- Sends the discovered email and confidence level to a Telegram group

🛠️ Step-by-Step Setup

1. **Add uProc Credentials**: Go to the uProc integration page and copy your email and API key. Add them as credentials in your n8n instance.
2. **Set Tool Parameters**: Use the Set node to define:
   - firstname: First name of the person
   - lastname: Last name of the person
   - domain: Their company domain
3. **Replace the Set Node (Optional)**: You can dynamically fetch the firstname, lastname, and domain from other sources such as Google Sheets, MySQL or Postgres, or webhook/form submissions (see the Code node sketch at the end of this section).
4. **Run the Workflow**: Trigger the flow manually or integrate it with a larger automation.

🔍 uProc Parameters Explained

- **domain**: The company domain (e.g., uproc.io)
- **firstname**: First name of the person
- **lastname** (in parameter: language): Last name of the person
- **mode**:
  - verify: Verifies the email in real time with the mail server
  - guess: Guesses based on the company format (e.g., firstname.lastname@domain.com)

📦 uProc Response Fields

- **email**: Discovered email address
- **confidence**: Indicates if the result is verified or risky (e.g., catch-all)
- **score**: Reliability score from 0 (unreliable) to 99 (highly reliable)

📬 Notification via Telegram

After discovering the email, the result is sent to a specified Telegram channel with this format:

User Miquel Colomer has next email on uproc.io: contact@uproc.io (verified - 99)

Clicking the email allows you to send a message directly to the recipient.

🔐 Credentials Used

- **uProc API**: For discovering email addresses
- **Telegram API**: To send messages to a specific group/channel

✨ Customization Tips

- **Loop over a list of people**: Replace the Set node with a data source that contains multiple people.
- **Filter by score or confidence** before sending.
- **Add additional outputs**: You can send the data via Email or Slack, or save it to a database.
- **Trigger automatically**: Combine with a webhook or time-based trigger for automation.

❓Questions?

Template created by Miquel Colomer and n8nhackers.com. Need help customizing or deploying? Contact us for consulting and support.
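If you replace the Set node with an upstream data source (step 3 above), a Code node can normalize incoming rows into the three fields the uProc node reads. A minimal sketch, assuming the upstream items expose `first_name`, `last_name`, and `company_domain` fields (hypothetical names; map them to your actual source):

```javascript
// n8n Code node ("Run Once for All Items"): normalize upstream rows
// into the firstname / lastname / domain fields used by uProc.
// The input field names are assumptions; adjust them to your source.
return $input.all().map((item) => ({
  json: {
    firstname: String(item.json.first_name || '').trim(),
    lastname: String(item.json.last_name || '').trim(),
    // Strip any protocol or path so only the bare domain remains.
    domain: String(item.json.company_domain || '')
      .replace(/^https?:\/\//, '')
      .replace(/\/.*$/, '')
      .toLowerCase(),
  },
}));
```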
by Angel Menendez
CallForge - AI Gong Transcript PreProcessor

Transform your Gong.io call transcripts into structured, enriched, and AI-ready data for better sales insights and analytics.

Who is This For?

This workflow is designed for:
✅ Sales teams looking to automate call transcript formatting.
✅ Revenue operations (RevOps) professionals optimizing AI-driven insights.
✅ Businesses using Gong.io that need structured, enriched call transcripts for better decision-making.

What Problem Does This Workflow Solve?

Manually processing raw Gong call transcripts is inefficient and often lacks essential context for AI-driven insights. With CallForge, you can:
✔ Extract and format Gong call transcripts for structured AI processing.
✔ Enhance metadata using sales data from Salesforce.
✔ Classify speakers as internal (sales team) or external (customers).
✔ Identify external companies by filtering out free email domains (e.g., Gmail, Yahoo).
✔ Enrich customer profiles using PeopleDataLabs to identify company details and locations.
✔ Prepare transcripts for AI models by structuring conversations and removing unnecessary noise.

What This Workflow Does

1. Retrieves Gong Call Data
- Calls the Gong API to extract call metadata, speaker interactions, and collaboration details.
- Fetches call transcripts for AI processing.

2. Processes and Cleans Transcripts
- Converts call transcripts into structured, speaker-based dialogues.
- Labels each speaker as either Internal (Sales Team) or External (Customer) (see the sketch at the end of this section).

3. Extracts Company Information
- Retrieves **Salesforce data** to match customers with existing sales opportunities.
- Filters out **free email domains** to determine the **customer's actual company domain**.
- Calls the PeopleDataLabs API to retrieve additional company data and location details.

4. Merges and Enriches Data
- Combines Gong metadata with Salesforce customer details and insights.
- Ensures all necessary data is available for AI-driven sales insights.

5. Final Formatting for AI Processing
- Merges all call transcript data into a single structured format for AI analysis.
- Extracts the final cleaned, enriched dataset for further AI-powered insights.

How to Set Up This Workflow

1. Connect Your APIs
🔹 Gong API Access – Set up your Gong API credentials in n8n.
🔹 Salesforce Setup – Ensure API access if you want customer enrichment.
🔹 PeopleDataLabs API – Required to retrieve company and location details based on email domains.
🔹 Webhook Integration – Modify the webhook call to push enriched call data to an internal system.

The CallForge series:
- CallForge - 01 - Filter Gong Calls Synced to Salesforce by Opportunity Stage
- CallForge - 02 - Prep Gong Calls with Sheets & Notion for AI Summarization
- CallForge - 03 - Gong Transcript Processor and Salesforce Enricher
- CallForge - 04 - AI Workflow for Gong.io Sales Calls
- CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync
- CallForge - 06 - Automate Sales Insights with Gong.io, Notion & AI
- CallForge - 07 - AI Marketing Data Processing with Gong & Notion
- CallForge - 08 - AI Product Insights from Sales Calls with Notion

How to Customize This Workflow

💡 Modify Data Sources – Connect different CRMs (e.g., HubSpot, Zoho) instead of Salesforce.
💡 Expand AI Analysis – Add another AI model (e.g., OpenAI GPT, Claude) for advanced conversation insights.
💡 Change Speaker Classification Rules – Adjust the internal vs. external speaker logic to match your team's structure.
💡 Filter Specific Customers – Modify the free email filtering logic to better fit your company's needs.

Why Use CallForge?

🚀 Automate Gong call transcript processing to save time.
📊 Improve AI accuracy with enriched, structured data.
🛠 Enhance sales strategy by extracting actionable insights from calls.

Start optimizing your Gong transcript analysis today!
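The speaker classification in step 2 can be done in a Code node by checking each speaker's email domain against your own domain and a blocklist of free-mail providers. A minimal sketch, assuming each item carries a `speakerEmail` field and that your sales team uses a single company domain (both are assumptions; adapt them to your Gong payload):

```javascript
// n8n Code node: label speakers and surface the customer's company domain.
// INTERNAL_DOMAIN and the speakerEmail field name are assumptions.
const INTERNAL_DOMAIN = 'yourcompany.com';
const FREE_DOMAINS = new Set(['gmail.com', 'yahoo.com', 'hotmail.com', 'outlook.com']);

return $input.all().map((item) => {
  const email = String(item.json.speakerEmail || '').toLowerCase();
  const domain = email.split('@')[1] || '';
  const isInternal = domain === INTERNAL_DOMAIN;
  return {
    json: {
      ...item.json,
      speakerType: isInternal ? 'Internal (Sales Team)' : 'External (Customer)',
      // Only non-free external domains are useful for company enrichment.
      companyDomain: !isInternal && !FREE_DOMAINS.has(domain) ? domain : null,
    },
  };
});
```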
by Ferenc Erb
Overview

An automation workflow that creates a complete REST API for digitally signing PDF documents using n8n webhooks. This service demonstrates how to implement secure document signing functionality through standardized API endpoints with file upload and download capabilities.

Use Case

This workflow is designed for developers and automation specialists who need to implement digital document signing. It's particularly useful for:
- Integrating PDF signing capabilities into existing document workflows
- API-based automation of signature processes
- Creating proof-of-concept implementations for document verification systems
- Learning n8n's webhook capabilities and file handling techniques
- Testing PDF signing in development environments before production implementation

What This Workflow Does

API-Based Document Management
- Exposes RESTful webhook endpoints for all document operations
- Handles multipart/form-data uploads for PDF documents
- Processes JSON payloads for signing configuration
- Provides download functionality for completed documents

Digital Certificate Handling
- Uploads existing PFX/PKCS#12 digital certificates
- Generates new certificates with customizable attributes
- Securely manages certificate storage and access
- Associates certificates with signing operations

Cryptographic PDF Signing
- Applies digital signatures using industry-standard cryptographic methods
- Embeds signature information within the PDF document structure
- Validates document integrity through cryptographic verification
- Preserves the original document while adding signature elements

Webhook Integration System
- Routes different API methods to appropriate handlers
- Validates request payloads and file content
- Manages authentication through webhook paths
- Returns structured responses for integration with other systems

Technical Architecture

Components
- API Gateway: n8n webhook nodes that receive external requests
- Request Router: Switch nodes that direct operations based on method parameters
- Document Processor: Function nodes for PDF manipulation and verification
- Certificate Manager: Specialized nodes for cryptographic key operations
- Storage Interface: File operation nodes for document persistence
- Response Formatter: Nodes that structure API responses

Integration Flow

Client Request → Webhook Endpoint → Method Router → Processing Engine → Digital Signing → Storage → Response Generation → Client Response

Setup Instructions

Prerequisites
- n8n installation (minimum version 0.214.0)
- Node.js 14 or higher
- Required environment variable: NODE_FUNCTION_ALLOW_EXTERNAL: "node-forge,@signpdf/signpdf,@signpdf/signer-p12,@signpdf/placeholder-plain"

Configuration Steps
1. Import Workflow
   - Import the workflow JSON into your n8n instance
   - Activate the workflow to enable the webhooks
2. Configure Storage
   - Set the storage path variables in the workflow
   - Ensure proper permissions on the storage directories
3. Test API Endpoints
   - Use the included test scripts to verify functionality
   - Test PDF upload, certificate generation, and signing
4. Integration
   - Document the webhook URLs for integration with other systems
   - Configure error handling according to your requirements

Testing Methods

Test the workflow functionality using various HTTP requests and JSON data:
- Upload PDF documents to the document processing endpoint
- Upload or generate digital certificates
- Execute PDF signing operations
- Download signed documents from the download endpoint

Webhook Endpoints

The workflow exposes two primary webhook endpoints that form a complete API for PDF digital signing operations.

1. Document Processing Endpoint (/webhook/docu-digi-sign)

This endpoint handles all document and certificate operations:

- Method: Upload PDF
  - HTTP: POST
  - Content-Type: multipart/form-data
  - Parameters: method, uploadType, fileName, fileData
- Method: Upload Certificate
  - HTTP: POST
  - Content-Type: multipart/form-data
  - Parameters: method, uploadType, fileName, fileData
- Method: Generate Certificate
  - HTTP: POST
  - Content-Type: application/json
  - Parameters: method, subjectCN, issuerCN, serialNumber, validFrom, validTo, password
- Method: Sign PDF
  - HTTP: POST
  - Content-Type: application/json
  - Parameters: method, inputPdf, pfxFile, pfxPassword (a request sketch appears at the end of this section)

2. Document Download Endpoint (/webhook/docu-download)

This endpoint handles the retrieval of processed documents:

- Method: Download Signed PDF
  - HTTP: GET
  - Content-Type: application/json
  - Parameters: method, fileType, fileName

Key Workflow Sections

The workflow is organized into logical sections with clear responsibilities:
- **Request Processing**: Parses incoming webhook data
- **Method Routing**: Directs requests to appropriate handlers
- **Document Management**: Handles file operations and storage
- **Cryptographic Operations**: Manages signing and certificate functions
- **Response Formatting**: Structures and returns results
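As a quick illustration of the endpoint contract above, here is a minimal client sketch for the Sign PDF method. This is a hedged example: the base URL, the `method` value, and the file names are assumptions; use your own instance URL, the method string your Switch node expects, and the names returned by your earlier upload calls.

```javascript
// Sketch: call the Sign PDF method on the document processing endpoint.
const BASE_URL = 'http://localhost:5678'; // placeholder n8n instance URL

const res = await fetch(`${BASE_URL}/webhook/docu-digi-sign`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    method: 'signPdf',         // method name is an assumption; match the Switch node
    inputPdf: 'contract.pdf',  // a previously uploaded PDF
    pfxFile: 'signer.pfx',     // a previously uploaded/generated certificate
    pfxPassword: 'changeit',
  }),
});
console.log(res.status, await res.json());
```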
by Joseph LePage
Compare Local Ollama Vision Models for Image Analysis using Google Docs

Process images using locally hosted Ollama vision models to extract detailed descriptions, contextual insights, and structured data. Save results directly to Google Docs for efficient collaboration.

Who is this for?

This workflow is ideal for developers, data analysts, marketers, and AI enthusiasts who need to process and analyze images using locally hosted Ollama vision language models. It's particularly useful for tasks requiring detailed image descriptions, contextual analysis, and structured data extraction.

What problem is this workflow solving? / Use Case

The workflow solves the challenge of extracting meaningful insights from images in exhaustive detail, such as identifying objects, analyzing spatial relationships, extracting textual elements, and providing contextual information. This is especially helpful for applications in real estate, marketing, engineering, and research.

What this workflow does

This workflow:
- Downloads an image file from Google Drive.
- Processes the image using multiple Ollama vision models (e.g., Granite3.2-Vision, Gemma3, Llama3.2-Vision); see the API sketch at the end of this section.
- Generates detailed markdown-based descriptions of the image.
- Saves the output to a Google Docs file for easy sharing and further analysis.

Setup

1. Ensure you have access to a local instance of Ollama: https://ollama.com/
2. Pull the Ollama vision models.
3. Configure your Google Drive and Google Docs credentials in n8n.
4. Provide the image file ID from Google Drive in the designated node.
5. Update the list of Ollama vision models.
6. Test the workflow by clicking 'Test Workflow' to trigger the process.

How to customize this workflow to your needs

- Replace the image source with another provider if needed (e.g., AWS S3 or Dropbox).
- Modify the prompts in the "General Image Prompt" node to suit specific analysis requirements.
- Add additional nodes for post-processing or for integrating results into other platforms like Slack or HubSpot.

Key Features:

- **Detailed Image Analysis**: Extracts comprehensive details about objects, spatial relationships, text elements, and contextual settings.
- **Multi-Model Support**: Utilizes multiple vision models dynamically for optimal performance.
- **Markdown Output**: Formats results in markdown for easy readability and documentation.
- **Google Drive Integration**: Seamlessly downloads images and saves results to Google Docs.
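For reference, the HTTP call the workflow makes to a local Ollama instance boils down to a request like the following. This is a sketch: the model name and prompt are placeholders, and the base64 image data would come from the Google Drive download step.

```javascript
// Sketch: one image-analysis request against a local Ollama instance.
const imageBase64 = '<BASE64_IMAGE_DATA>'; // e.g. Buffer.from(bytes).toString('base64')

const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.2-vision',  // any vision model you have pulled
    prompt: 'Describe this image in exhaustive detail, formatted as markdown.',
    images: [imageBase64],     // base64 only, without a data: URI prefix
    stream: false,
  }),
});
const { response } = await res.json(); // the model's markdown description
console.log(response);
```

To compare models, the workflow repeats this call with each model name in the configured list and collects the responses before writing them to Google Docs.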
by lin@davoy.tech
This workflow template, "Chinese Translator via Line x OpenRouter (Text & Image)" is designed to provide seamless Chinese translation services directly within the LINE messaging platform. By integrating with OpenRouter.ai and advanced language models like Qwen, this workflow translates text or images containing Chinese characters into pinyin and English translations, making it an invaluable tool for language learners, travelers, and businesses operating in multilingual environments. This template is ideal for: Language Learners: Who want to practice Chinese by receiving instant translations of text or images. Travelers: Looking for quick translations of Chinese signs, menus, or documents while abroad. Educators: Teaching Chinese language courses and needing tools to assist students with translations. Businesses: Operating in multilingual markets and requiring efficient communication tools. Automation Enthusiasts: Seeking to build intelligent chatbots that can handle language translation tasks. What Problem Does This Workflow Solve? Translating Chinese text or images into English and pinyin can be challenging, especially for beginners or those without access to reliable translation tools. This workflow solves that problem by: Automatically detecting and translating text or images containing Chinese characters. Providing accurate translations in both pinyin and English for better comprehension. Supporting multiple input formats (text, images) to cater to diverse user needs. Sending replies directly to users via the LINE messaging platform , ensuring accessibility and ease of use. What This Workflow Does 1) Receive Messages via LINE Webhook The workflow is triggered when a user sends a message (text, image, or other types) to the LINE bot. 2) Display Loading Animation A loading animation is displayed to reassure the user that their request is being processed. 3) Route Input Types The workflow uses a Switch node to determine the type of input (text, image, or unsupported formats). If the input is text , it is sent to the OpenRouter.ai API for translation. If the input is an image , the workflow extracts the image content, converts it to base64, and sends it to the API for translation. Unsupported formats trigger a polite response indicating the limitation. 4) Translate Content Using OpenRouter.ai The workflow leverages Qwen models from OpenRouter.ai to generate translations: For text inputs, it provides Chinese characters , pinyin , and English translations . For images, it extracts and translates using the qwen-VL model which can take images 5) Reply with Translations The translated content is formatted and sent back to the user via the LINE Reply API. Setup Guide Pre-Requisites Access to the LINE Developers Console to configure your webhook and channel access token. An OpenRouter.ai account with credentials to access Qwen models. Basic knowledge of APIs, webhooks, and JSON formatting. Step-by-Step Setup 1) Configure the LINE Webhook: Go to the LINE Developers Console and set up a webhook to receive incoming messages. Copy the Webhook URL from the Line Webhook node and paste it into the LINE Console. Remove any "test" configurations when moving to production. 2) Set Up OpenRouter.ai: Create an account on OpenRouter.ai and obtain your API credentials. Connect your credentials to the OpenRouter nodes in the workflow. 3) Test the Workflow: Simulate sending text or images to the LINE bot to verify that translations are processed and replied correctly. 
How to Customize This Workflow to Your Needs Add More Languages: Extend the workflow to support additional languages by modifying the API calls. Enhance Image Processing: Integrate more advanced OCR tools to improve text extraction from complex images. Customize Responses: Modify the reply format to include additional details, such as grammar explanations or cultural context. Expand Use Cases: Adapt the workflow for specific industries, such as tourism or e-commerce, by tailoring the translations to relevant vocabulary. Why Use This Template? Real-Time Translation: Provides instant translations of text and images, improving user experience and accessibility. Multimodal Support: Handles both text and image inputs, catering to diverse user needs. Scalable: Easily integrate into existing systems or scale to support multiple users and workflows. Customizable: Tailor the workflow to suit your specific audience or industry requirements.
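The reply step (5 above) uses the LINE Messaging API's reply endpoint. A minimal sketch of that call, assuming you have a channel access token and the `replyToken` from the incoming webhook event (the message text is a placeholder):

```javascript
// Sketch: send the formatted translation back through the LINE Reply API.
const CHANNEL_ACCESS_TOKEN = 'YOUR_LINE_CHANNEL_ACCESS_TOKEN'; // placeholder
const replyToken = 'REPLY_TOKEN_FROM_WEBHOOK_EVENT';           // from the webhook payload

await fetch('https://api.line.me/v2/bot/message/reply', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${CHANNEL_ACCESS_TOKEN}`,
  },
  body: JSON.stringify({
    replyToken,
    messages: [
      // One text message with characters, pinyin, and English on separate lines.
      { type: 'text', text: '你好\nnǐ hǎo\nHello' },
    ],
  }),
});
```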
by Carlos Contreras
Introduction

This workflow is designed to create and attach notes or comments to any record in your Odoo instance. It acts as a sub-workflow that can be triggered by a main workflow to log messages or comments in a centralized manner. By leveraging the powerful Odoo API, this template ensures that updates to records are handled efficiently, providing an organized way to document important information related to your business processes.

Setup Instructions

1. Import the Workflow: Import the provided JSON file into your n8n instance.
2. Odoo Credentials: Ensure you have valid Odoo API credentials (e.g., "Roodsys Odoo Automation Account") configured in n8n.
3. Node Configuration:
   - Verify that the "Odoo" node (consider renaming it to "Odoo Record Manager" for clarity) is set up with your server details and authentication parameters.
   - Check that the workflow trigger ("When Executed by Another Workflow") is configured to receive input parameters from the parent workflow.
4. Execution Trigger: This workflow is designed to be initiated by another workflow. Make sure the main workflow supplies the required inputs.

Workflow Details

Trigger Node: The workflow begins with the "When Executed by Another Workflow" node, which accepts three inputs:
- rec_id: A numeric identifier for the Odoo record.
- message: The text of the comment or note.
- model: The specific Odoo model (e.g., rs.deployment.action.log) where the note should be attached.

Odoo Node: The second node in the workflow calls the Odoo API to create a new log message. It maps the inputs as follows:
- message_type is set to "comment".
- model is assigned the provided model name.
- res_id is assigned the record ID (rec_id).
- body is assigned the message content.

(A JSON-RPC sketch of this call appears at the end of this section.)

Additional Information: A sticky note node is included to provide a brief overview of the workflow's purpose directly within the interface.

Input Parameters

- Record ID (rec_id): The unique identifier of the record in Odoo where the note will be added.
- Message (message): The content of the comment or note that is to be logged.
- Model (model): The Odoo model name indicating the context in which the note should be created (e.g., rs.deployment.action.log).

Usage Examples

- Internal Logging: Use the workflow to attach internal comments or logs to specific records, such as customer profiles, orders, or deployment logs.
- Audit Trails: Create a comprehensive audit trail by documenting changes or important events in Odoo records.
- Integration with Other Workflows: Link this workflow with other automation processes in n8n (like email notifications, data synchronization, or reporting) to create a seamless integration across your systems.

Pre-conditions

- The Odoo instance must be accessible and correctly configured.
- API permissions and user roles should be validated to ensure that the workflow has the necessary access rights.
- The workflow expects inputs from an external trigger or parent workflow.

Customization & Integration

This template offers several customization options to tailor it to your needs:
- Field Customization: Modify or add new fields to match your logging or commenting requirements.
- Node Renaming: Rename nodes for better clarity and consistency within your workflow ecosystem.
- Integration Possibilities: Easily integrate this workflow with other processes in n8n, such as triggering notifications or synchronizing data across different systems.

This sub-workflow receives data from a main workflow (for example, a record ID, a message, and the Odoo model) and creates a new note (or comment) in the corresponding Odoo record. Essentially, it acts as a centralized point for logging comments or notes in a specific Odoo model, ensuring that the information remains organized and easy to track.

Your model must inherit from:

_inherit = ['portal.mixin', 'mail.thread.main.attachment']
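For reference, here is roughly what the Odoo node's create call looks like expressed as a raw JSON-RPC request. This is a hedged sketch: the instance URL, database, and credentials are placeholders, and targeting `mail.message` is an assumption inferred from the field mapping above (message_type, model, res_id, body).

```javascript
// Sketch: create a comment on an Odoo record over JSON-RPC.
const URL = 'https://your-odoo.example.com/jsonrpc'; // placeholder
const [DB, LOGIN, PASSWORD] = ['yourdb', 'user@example.com', 'secret']; // placeholders

async function rpc(service, method, args) {
  const res = await fetch(URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', method: 'call', params: { service, method, args } }),
  });
  return (await res.json()).result;
}

const uid = await rpc('common', 'authenticate', [DB, LOGIN, PASSWORD, {}]);

// 'mail.message' is an assumption based on the node's field mapping.
const noteId = await rpc('object', 'execute_kw', [DB, uid, PASSWORD,
  'mail.message', 'create', [{
    message_type: 'comment',
    model: 'rs.deployment.action.log',  // model from the parent workflow
    res_id: 42,                         // rec_id from the parent workflow
    body: 'Deployment step completed.', // message from the parent workflow
  }]]);
console.log('created note', noteId);
```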
by Miquel Colomer
This n8n workflow template automates the process of finding LinkedIn profiles for a person based on their name and company. It scrapes Google search results via Bright Data, parses the results with GPT-4o-mini, and delivers a personalized follow-up email with insights and suggested outreach steps.

🚀 What It Does

- Accepts a user-submitted form with a person's full name and company.
- Performs a Google search using Bright Data to find LinkedIn profiles and company data.
- Uses GPT-4o-mini to parse HTML results and identify matching profiles.
- Filters and selects the most relevant LinkedIn entry.
- Analyzes the data to generate a buyer persona and follow-up strategy.
- Sends a styled email with insights and outreach steps.

🛠️ Step-by-Step Setup

1. Deploy the form trigger to accept person data (name, position, company).
2. Build a Google search query from user input (see the query sketch at the end of this section).
3. Scrape search results using Bright Data.
4. Extract HTML content using the HTML node.
5. Use GPT-4o-mini to parse LinkedIn entries and company insights.
6. Filter for matches based on user input.
7. Merge relevant data and generate personalized outreach content.
8. Send the email to a predefined address.
9. Show a final confirmation message to the user.

🧠 How It Works: Workflow Overview

- **Trigger:** When User Completes Form
- **Search:** Edit Url LinkedIn, Get LinkedIn Entry on Google, Extract Body and Title, Parse Google Results
- **Matching:** Extract Parsed Results, Filter, Limit, IF LinkedIn Profile is Found?
- **Fallback:** Form Not Found if no match
- **Company Lookup:** Edit Company Search, Get Company on Google, Parse Results, Split Out Content
- **Content Generation:** Merge, Create a Followup for Company and Person
- **Email Delivery:** Send Email, Form Email Sent

📨 Final Output

An HTML-styled email (using Tailwind CSS) with:
- Matched LinkedIn profile
- Company insights
- Persona-based outreach strategy

🔐 Credentials Used

- **BrightData account** for scraping Google search results
- **OpenAI account** for GPT-4o-mini-powered parsing and content generation
- **SMTP account** for sending follow-up emails

❓Questions?

Template and node created by Miquel Colomer and n8nhackers. Need help customizing or deploying? Contact us for consulting and support.
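One plausible way to build the search URL in step 2 (the exact query format used by the template may differ; the field names mirror the form inputs and are assumptions):

```javascript
// Sketch: construct the Google search URL that the Bright Data
// request fetches. Restricting to linkedin.com/in narrows results
// to personal profile pages.
const fullName = 'Miquel Colomer'; // from the form trigger
const company = 'n8nhackers';      // from the form trigger

const query = `site:linkedin.com/in "${fullName}" "${company}"`;
const searchUrl = `https://www.google.com/search?q=${encodeURIComponent(query)}`;
// → pass searchUrl to the Bright Data scraping node
```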
by Joseph
YouTube Transcript Extraction Workflow

This n8n workflow extracts and processes transcripts from YouTube videos using the YouTube Transcript API on RapidAPI. It allows users to retrieve subtitles from YouTube videos, clean them up, and return structured transcript data for further processing.

Table of Contents

- Problem Statement & Target Audience
- Pre-conditions & API Requirements
- Step-by-Step Workflow Explanation
- Customization Guide
- How to Set Up This Workflow

Problem Statement & Target Audience

Who is this for? This workflow is ideal for content creators, researchers, and developers who need to:
- Extract subtitles from YouTube videos automatically.
- **Format and clean** transcript data for readability.
- Use transcripts for summarization, content repurposing, or language analysis.

Pre-conditions & API Requirements

API Required
- **YouTube Transcript API** (RapidAPI)

n8n Setup Prerequisites
- A running n8n instance (Installation Guide)
- A RapidAPI account to access the YouTube Transcript API
- An API key from RapidAPI to authenticate requests

Step-by-Step Workflow Explanation

1. Input YouTube Video URL (Trigger)
   - Provides a simple input form where users enter a YouTube video URL.
2. HTTP Request Node (Retrieve Transcript Data)
   - Makes a POST request to the YouTube Transcript API via RapidAPI.
   - Passes the video URL received from the input form.
   - Uses an environment variable to store the API key securely.
3. Function Node (Process Transcript)
   - **Receives** the API response containing the **raw transcript**.
   - **Processes and cleans** the transcript: removes unwanted characters and formats the text for readability.
   - **Handles errors** when no transcript is available.
   - **Outputs** both the raw and cleaned transcript for further use. (A hedged sketch of this node appears at the end of this section.)
4. Set Field Node (Response Formatting)
   - **Structures** the processed transcript data into a user-friendly format.
   - **Returns** the final transcript data to the client.

Customization Guide

1. Modify Transcript Cleaning Rules
   - Update the Function node to apply custom text processing, such as removing timestamps or changing the output format (e.g., JSON, plain text).
2. Store Transcripts in a Database
   - Add a database node (e.g., MySQL, PostgreSQL, or Firebase) to save transcripts.
3. Generate Summaries from Transcripts
   - Integrate AI services (e.g., OpenAI, Google Gemini) to summarize transcripts.
4. Convert Transcripts into Speech
   - Use the ElevenLabs API to generate an AI-powered voiceover from transcripts.

How to Set Up This Workflow

Step 1: Import the Workflow into n8n
- Download or copy the workflow JSON file.
- Import it into your n8n instance.

Step 2: Set Up the API Key
- Sign up for the YouTube Transcript API.
- Subscribe to the API.
- Copy your API key and paste it where "your_api_key" appears.

Step 3: Activate the Workflow
- Start the workflow in n8n.
- Enter a YouTube video URL in the input form.
- The workflow will return a cleaned transcript.

This workflow ensures seamless YouTube transcript extraction and processing with minimal manual effort. 🚀
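A minimal sketch of the cleaning step from the Function node (step 3 above). The response shape, an array of `{ text }` segments on a `transcript` field, is an assumption; align the field names with the API's actual payload:

```javascript
// n8n Code node ("Run Once for All Items"): clean the raw transcript
// into readable text and return both versions.
const segments = $input.first().json.transcript || [];

const raw = segments.map((s) => s.text).join(' ');
const cleaned = raw
  .replace(/\[[^\]]*\]/g, '')  // drop bracketed cues like [Music]
  .replace(/&amp;/g, '&')      // unescape a common HTML entity
  .replace(/\s+/g, ' ')        // collapse runs of whitespace
  .trim();

if (!cleaned) {
  // Error path: no transcript available for this video.
  return [{ json: { error: 'No transcript available for this video.' } }];
}
return [{ json: { raw, cleaned } }];
```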
by Jason Guest
Automatically deploy n8n workflows by simply dropping JSON files into a Google Drive folder—this template watches for new exports, cleans and imports them into your n8n instance, applies a tag, and then archives the processed files.

Who is this template for?

This workflow template is designed for n8n power users and automation specialists who need a simple, reliable way to bulk-deploy or version-control n8n workflows via Google Drive. It's perfect if you:
- Manage multiple n8n instances (staging, production, etc.)
- Want an easy "drop-in" approach to publish new or updated workflows
- Prefer storing/exporting JSON in Drive rather than editing in the UI

Use case

Manually importing .json exports into n8n is slow and error-prone. With this template you can:
- Keep your workflows in a shared Drive folder (version-control friendly)
- Automatically sanitize each file so only supported settings go through
- Tag deployed workflows consistently for easy filtering
- Move processed files to a "Deployed" folder for clear change tracking

How it works

1. Watch the "ToDeploy" folder in Google Drive for new .json files
2. Download & parse each file into a JSON object
3. Clean the payload: strip out everything except the allowed executionOrder (and timezone if you choose)
4. POST the cleaned workflow to your n8n instance via /api/v1/workflows
5. PUT a predefined tag onto the newly created workflow
6. Move the file to your "Deployed" folder when the import succeeds, or capture the workflow name & error if it fails

A hedged sketch of the import call (steps 4–5) appears at the end of this section.

Setup instructions

1. In Google Drive, create a ToDeploy folder and a Deployed folder:
   - Update "Google Drive Trigger - ToDeploy folder" to point at your ToDeploy folder
   - Update "Move JSON file to Deployed folder" to point at your Deployed folder
2. Create an n8n API key:
   - Go to Settings > n8n API
   - Select Create an API key
   - Copy the API key
3. In the "Get Existing Workflow Tags" node, create n8n API authentication:
   - Authentication: Predefined Credential Type
   - Credential Type: n8n API
   - Create a new credential: paste in the API key and set Baseurl to https://SUBDOMAIN.YOURDOMAINNAME.com/api/v1/
4. Add the n8n API authentication to:
   - "Create n8n Workflow" node
   - "Set Workflow Tag" node
5. Add your n8n instance URL to the N8N_Instance_URL variable in the "Set n8n URL variable" node.
6. Run the "1. Get Workflow Tags" flow and copy the ID of your chosen tag.
7. In the "Set n8n API URL & Tag ID variables" node:
   - Add the workflow tag ID to the N8N_Instance_Tag variable
   - Add your n8n instance URL to the N8N_Instance_URL variable
8. Set the workflow to Active.

How to adjust it to your needs

- **Use different tags**: run Get Existing Workflow Tags on start-up to refresh available tags, or hard-code multiple tags in the Set Workflow Tag node.
- **Add notifications**: connect the error branch to Slack or Email nodes so you get alerted if an import fails.
- **Swap Drive for another storage**: replace the Google Drive nodes with Dropbox, S3, or GitHub triggers if you prefer a different source for your JSON files.
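The import call (steps 4–5 above) expressed outside n8n looks roughly like this. A hedged sketch: the API key, instance URL, tag ID, and file path are placeholders, and the tag endpoint's payload shape may vary by n8n version, so check your instance's API reference.

```javascript
// Sketch: clean an exported workflow JSON, create it via the n8n
// public API, then attach a tag.
import { readFile } from 'node:fs/promises';

const API = 'https://SUBDOMAIN.YOURDOMAINNAME.com/api/v1';
const HEADERS = { 'Content-Type': 'application/json', 'X-N8N-API-KEY': 'YOUR_API_KEY' };

const imported = JSON.parse(await readFile('./exported-workflow.json', 'utf8'));

// Keep only fields the create endpoint accepts; drop unsupported settings.
const cleaned = {
  name: imported.name,
  nodes: imported.nodes,
  connections: imported.connections,
  settings: { executionOrder: imported.settings?.executionOrder ?? 'v1' },
};

const created = await (await fetch(`${API}/workflows`, {
  method: 'POST', headers: HEADERS, body: JSON.stringify(cleaned),
})).json();

// Attach the predefined tag to the newly created workflow.
await fetch(`${API}/workflows/${created.id}/tags`, {
  method: 'PUT', headers: HEADERS,
  body: JSON.stringify([{ id: 'YOUR_TAG_ID' }]),
});
```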
by n8n Team
This workflow sends a message to a Discord channel when a new row is added or an existing row is updated in a Google Sheet. The message includes all data rows in the Google Sheet.

Prerequisites

- Discord account and Discord credentials.
- Google account and Google credentials.

How it works

Using a Code node, we can use the obtained Google Sheet data to create a custom message that will be sent to Discord (see the sketch below). The message is sent to the Discord channel specified in the Discord node.

Setup

This workflow requires that you set up a Discord webhook and have an existing Google Sheet with data. See how to set up a Discord webhook here.
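A minimal sketch of that Code node, assuming the Google Sheets node feeds it one item per row (the column layout is whatever your sheet contains):

```javascript
// n8n Code node: turn the rows from the Google Sheets node into one
// Discord-ready message.
const rows = $input.all().map((item) => item.json);

const lines = rows.map(
  (row, i) => `${i + 1}. ${Object.values(row).join(' | ')}`
);

return [{
  json: {
    // Discord messages are capped at 2000 characters, so truncate.
    content: `📋 Sheet update:\n${lines.join('\n')}`.slice(0, 2000),
  },
}];
```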
by Nico Kowalczyk
Description:

This template facilitates the transfer of a folder, along with all its files and subfolders, within a Nextcloud instance. The Nextcloud user must have access to both the source and destination folders. While Nextcloud allows folder movement, complications may arise when dealing with external storage that has rate limits. This workflow ensures the individual transfer of each file to avoid exceeding rate limits, particularly useful for setups involving external storage with rate limitations.

How it works:

1. Identify all files and subfolders within the specified source folder.
2. Recursively search subfolders for additional files.
3. Replicate the folder structure in the target folder (see the path-mapping sketch at the end of this section).
4. Individually move each identified file to the corresponding location in the target folder.

Set up steps:

1. Set Nextcloud credentials for all Nextcloud nodes involved in the process.
2. Edit the trigger settings. Detailed instructions can be found within the respective trigger configuration.
3. Initiate the workflow to commence the folder transfer process.

Help

If you need assistance with applying this template, feel free to reach out to me. You can find additional information about me and my services here: https://nicokowalczyk.de/links

I have also produced a video where I explain the workflow and provide an example. You can find the video here: https://youtu.be/K1kmG_Q_jRk

Cheers,
Nico Kowalczyk
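The path mapping behind step 3 can be expressed as a small helper: rewrite each file's source path onto the target root while keeping the relative structure. A sketch with placeholder folder paths:

```javascript
// Sketch: map a source file path to its destination path so the
// folder structure is replicated under the target root.
const SOURCE = '/Projects/Archive'; // placeholder source folder
const TARGET = '/Backup/Archive';   // placeholder target folder

function targetPath(sourceFilePath) {
  if (!sourceFilePath.startsWith(SOURCE + '/')) {
    throw new Error(`Path outside source folder: ${sourceFilePath}`);
  }
  return TARGET + sourceFilePath.slice(SOURCE.length);
}

// e.g. targetPath('/Projects/Archive/2023/report.pdf')
//   → '/Backup/Archive/2023/report.pdf'
```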
by ScrapeOps
Amazon Product Price Tracker

This workflow automatically monitors Amazon product prices, tracks price changes, and sends alerts when significant price fluctuations occur. Built with ScrapeOps' structured data API, it provides a reliable, maintenance-free solution for price tracking without worrying about anti-bot measures or complex selectors.

What This Workflow Does

- Monitors multiple Amazon products simultaneously using their ASINs
- Calculates both absolute and percentage price changes
- Sends customizable email alerts when prices cross defined thresholds
- Maintains a historical record of all price data for trend analysis
- Updates a Google Sheet with the latest price information

Prerequisites

- A ScrapeOps API key (register at https://scrapeops.io)
- Google account for Google Sheets integration
- SMTP email configuration for alerts

Setup Instructions

Spreadsheet Setup
1. Make a copy of the template spreadsheet: https://docs.google.com/spreadsheets/d/1hRv-TBXrpN6rkIU65WorttNHt-IPWas_An0sF4Of39U
2. Add your Amazon product ASINs in the "Products to Monitor" sheet
3. Set your desired alert thresholds for price increases/decreases

Workflow Configuration
1. Add your ScrapeOps API key to the "Setup" node
2. Update the spreadsheet URL in the "Setup" node with YOUR copy
3. Configure your email settings for notifications
4. Adjust the schedule frequency as needed (default: hourly)

How It Works

The workflow reads product ASINs from your Google Sheet, fetches current pricing data via ScrapeOps' Amazon Product API, calculates price changes (see the sketch at the end of this section), updates your spreadsheet, and sends alerts when price movements exceed your defined thresholds. Unlike traditional web scrapers that break when websites change, this solution uses ScrapeOps' reliable API, which handles all the complexity of Amazon data extraction and ensures consistent results without maintenance.

Additional Notes

- This workflow is ideal for deal hunters, price comparison services, and e-commerce analytics
- The alerting system can be extended to additional channels like Slack or Telegram
- ScrapeOps handles all anti-bot measures, proxy management, and parsing complexities
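The price-change calculation reduces to a few lines in a Code node. A sketch, where the field names (`previousPrice`, `currentPrice`, `alertThresholdPct`) mirror the spreadsheet columns and are assumptions; align them with your copy of the template sheet:

```javascript
// n8n Code node: compute absolute and percentage price deltas and
// flag items whose movement crosses the configured threshold.
return $input.all().map((item) => {
  const prev = Number(item.json.previousPrice);
  const curr = Number(item.json.currentPrice);
  const threshold = Number(item.json.alertThresholdPct ?? 5); // default 5%

  const change = curr - prev;
  const changePct = prev ? (change / prev) * 100 : 0;

  return {
    json: {
      ...item.json,
      priceChange: Number(change.toFixed(2)),
      priceChangePct: Number(changePct.toFixed(2)),
      // Alert on moves in either direction beyond the threshold.
      shouldAlert: Math.abs(changePct) >= threshold,
    },
  };
});
```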