by Hugo
This workflow provides a robust solution for automatically backing up all your n8n workflows to a designated GitHub repository on a daily basis. By leveraging the n8n API and the GitHub API, it keeps your workflows version-controlled and securely stored, safeguarding against data loss and simplifying disaster recovery.

## How it works

The automation follows these key steps:

1. **Scheduled trigger**: The workflow starts automatically every day at a pre-configured time.
2. **List existing backups**: It connects to your GitHub repository and retrieves a list of already backed-up workflow files. This determines whether each workflow's backup file needs to be created or updated.
3. **Retrieve n8n workflows**: It fetches all current workflows from your n8n instance via the n8n REST API.
4. **Process and prepare**: Each retrieved workflow is processed individually: its data is converted to JSON, and the JSON content is encoded to base64, the format the GitHub API requires for file operations.
5. **Commit to GitHub**: For each n8n workflow:
   - A standardized filename is generated (e.g., `workflow-name-tag.json`).
   - The workflow checks whether a file with this name already exists in the repository (based on the list fetched in step 2).
   - If the file exists, it is updated with the latest version of the workflow; if not, a new file is created.
   - Each commit is timestamped for clarity.

This process ensures you always have an up-to-date copy of every n8n workflow stored securely under version control, providing peace of mind and a reliable backup history.

## Pre-requisites

Before you can use this template, please ensure you have the following:

- An active n8n instance (self-hosted or cloud).
- A GitHub account.
- A GitHub repository created to store the workflow backups.
- A GitHub Personal Access Token with `repo` scope (or a fine-grained token with read/write access to the specific backup repository). This token is used for GitHub API authentication.
- n8n API credentials (API key) for your n8n instance.

## Set up steps

Setting up this workflow should take approximately 10-15 minutes if you have your credentials ready.

1. **Import the template**: Import this workflow into your n8n instance.
2. **Configure n8n API credentials**: Locate the "Retrieve workflows" node. In the "Credentials" section for "n8n API", create new credentials (or select existing ones), then enter your n8n instance URL and your n8n API key (you can create an API key in the settings of your n8n instance).
3. **Configure GitHub credentials**: Locate the "List files from repo" node (the "Update file" and "Upload file" nodes use the same credential). In the "Credentials" section for "GitHub API", create new credentials, select the OAuth2/Personal Access Token authentication method, and enter the GitHub Personal Access Token you generated as per the pre-requisites.
4. **Specify repository details**: In the "List files from repo", "Update file", and "Upload file" GitHub nodes, set the **Owner** (your GitHub username or organization name), the **Repository** (the repository dedicated to backups), and the **Branch** (e.g., `main` or `master`) where backups should be stored. Optionally, specify a **Path** within the repository if you want backups in a specific folder (e.g., `n8n_backups/`); leave it blank to store files in the root.
5. **Adjust schedule (optional)**: Select the "Schedule Trigger" node and modify the trigger interval (e.g., change the time of day or frequency) as needed. By default, it is set for a daily run.
6. **Activate the workflow**: Save and activate the workflow.
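For context on why the base64 step exists: the GitHub contents endpoint (`PUT /repos/{owner}/{repo}/contents/{path}`), which the GitHub nodes wrap, expects the file body base64-encoded. A request payload looks roughly like this (values are placeholders):

```json
{
  "message": "n8n backup 2024-05-01: my-workflow",
  "content": "<base64-encoded workflow JSON>",
  "branch": "main"
}
```

Note that on the GitHub API side, updating an existing file additionally requires the file's current blob `sha`.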
## Explanation of nodes

Here's a detailed breakdown of each node used in this workflow:

- **Schedule trigger** (`n8n-nodes-base.scheduleTrigger`): Automatically starts the workflow on a defined schedule (e.g., daily at midnight).
- **List files from repo** (`n8n-nodes-base.github`): Connects to your specified GitHub repository and lists all files, primarily to check for existing workflow backups.
- **Aggregate** (`n8n-nodes-base.aggregate`): Consolidates the list of file names obtained from the "List files from repo" node into a single item for easier lookup in the "Check if file exists" node.
- **Retrieve workflows** (`n8n-nodes-base.n8n`): Uses the n8n API to fetch a list of all workflows currently present in your n8n instance.
- **Json file** (`n8n-nodes-base.convertToFile`): Takes the data of each workflow (retrieved by the "Retrieve workflows" node) and converts it into a structured JSON file.
- **To base64** (`n8n-nodes-base.extractFromFile`): Converts the binary content of the JSON file into a base64-encoded string, as required by the GitHub API for file content.
- **Commit date & file name** (`n8n-nodes-base.set`): Prepares metadata for the GitHub commit. It generates `commitDate` (the current date and time for the commit message) and `fileName` (a standardized file name for the backup, e.g., `my-workflow-vps-backups.json`, typically built from the workflow's name and its first tag).
- **Check if file exists** (`n8n-nodes-base.if`): Checks whether the `fileName` generated by "Commit date & file name" is present in the list of files aggregated by the "Aggregate" node. This determines if the workflow backup already exists in GitHub.
- **Update file** (`n8n-nodes-base.github`): If the file exists, updates it in your GitHub repository with the latest workflow content (base64-encoded) and a commit message.
- **Upload file** (`n8n-nodes-base.github`): If the file does not exist, creates and uploads a new file to your GitHub repository with the workflow content and a commit message.
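If you ever want to swap the "Commit date & file name" Set node for a Code node, the metadata step could be sketched like this. This is a minimal sketch, assuming the workflow's `name` and `tags` fields are present on the incoming items; the slug logic is illustrative:

```javascript
// Build the commit timestamp and a standardized backup file name per workflow.
const results = [];
for (const item of $input.all()) {
  const wf = item.json;
  const slug = (wf.name || 'unnamed').toLowerCase().replace(/[^a-z0-9]+/g, '-');
  const firstTag = (wf.tags && wf.tags[0] && wf.tags[0].name) || 'untagged';
  results.push({
    json: {
      ...wf,
      commitDate: new Date().toISOString(),
      fileName: `${slug}-${firstTag}.json`,
    },
  });
}
return results;
```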
## Customization

Here are a few ways you can customize this template to better fit your needs:

- **Backup path**: In the GitHub nodes ("List files from repo", "Update file", "Upload file"), specify a **Path** parameter to store backups in a specific folder within your repository (e.g., `workflows/` or `daily_backups/`).
- **Filename convention**: Modify the "Commit date & file name" node (specifically the expression for `fileName`) to change how backup files are named. For example, you might include the workflow ID or a different date format.
- **Commit messages**: Customize the commit messages in the "Update file" and "Upload file" GitHub nodes to include more specific information if needed.
- **Error handling**: Consider adding error-handling branches (e.g., using the "Error Trigger" node or checking for node execution failures) to notify you if a backup fails for any reason.
- **Filtering workflows**: If you only want to back up specific workflows (e.g., those with a particular tag or name pattern), add a "Filter" node after "Retrieve workflows" to include only the desired workflows; see the sketch after this list.
- **Backup frequency**: Adjust the "Schedule Trigger" node to change how often the backup runs (e.g., hourly, weekly, or on specific days).

Template was created in n8n v1.92.2.
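As a sketch of the filtering customization, a Code node could stand in for the Filter node. This assumes the n8n API returns each workflow's tags as an array of objects with a `name` property, which you should verify on your instance:

```javascript
// Code node placed after "Retrieve workflows": keep only workflows
// carrying a specific tag. The tag name "backup" is just an example.
const wantedTag = 'backup';
return $input.all().filter(item =>
  (item.json.tags || []).some(tag => tag.name === wantedTag)
);
```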
by Billy Christi
## Who is this for?

This workflow is perfect for:

- Companies that manage invoices through Google Drive
- Business owners who want to minimize manual data entry and maximize accuracy
- Accounting teams and finance departments seeking to automate invoice processing

## What problem is this workflow solving?

Processing invoices manually is time-consuming, error-prone, and inconsistent. This workflow solves those issues by:

- **Automating invoice processing** from detection to data extraction to storage
- **Improving accuracy** by using AI to extract key invoice data fields reliably
- **Reducing human workload** while maintaining compliance and consistency

## What this workflow does

This workflow creates a fully automated invoice processing system by:

1. Monitoring a Google Drive folder for new PDF invoices in real time
2. Downloading the PDF files and extracting their content using OCR technology
3. Using AI (OpenAI) to parse and extract key invoice fields such as invoice number, date, total amount, vendor name, itemized details, tax, and category
4. Validating the extracted data to ensure compliance with a structured JSON schema
5. Storing the structured data in Google Sheets for easy access, review, and reporting

Key features:

- AI-powered extraction handles both text-based and scanned PDF invoices
- Provides a structured, searchable invoice database in Google Sheets
- Runs as frequently as you need, ensuring timely processing

## Setup

1. Copy the Google Sheet template here: PDF Invoice Parser - Google Sheet Template
2. Connect your Google Drive account to the Drive Trigger and File Download nodes
3. Add your OpenAI API key in the AI Parser node
4. Link the Google Sheet in the final storage node
5. Drop a test invoice PDF into the monitored Drive folder

Required credentials:

- **OpenAI API key**
- **Google Drive credentials**
- **Google Sheets credentials**

## How to customize this workflow to your needs

- **Modify the polling interval** (default: every minute) for higher or lower frequency.
- **Integrate with your accounting software** by adding nodes (e.g., QuickBooks, Xero).
- **Use an alternative LLM** such as Gemini or Claude.
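For reference, the structured output the AI parser validates against could follow a JSON Schema along these lines. This is an illustrative sketch, not the template's exact schema; the field names are assumptions you can adapt:

```json
{
  "type": "object",
  "required": ["invoice_number", "invoice_date", "vendor_name", "total_amount"],
  "properties": {
    "invoice_number": { "type": "string" },
    "invoice_date": { "type": "string", "description": "ISO 8601 date" },
    "vendor_name": { "type": "string" },
    "total_amount": { "type": "number" },
    "tax": { "type": "number" },
    "category": { "type": "string" },
    "line_items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "description": { "type": "string" },
          "quantity": { "type": "number" },
          "unit_price": { "type": "number" }
        }
      }
    }
  }
}
```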
by nero
## How it works

This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt.

## How to use

1. Create an account and apply for an API key at https://ai.nero.com/ai-api?utm_source=n8n-base-workflow.
2. Fill your key into the "Create task" and "Query task status" nodes.
3. Select an AI service and modify the "Create task" node parameters (API docs: https://ai.nero.com/ai-api/docs).
4. Execute the workflow so that the webhook starts listening.
5. Make a test request with Postman or a similar tool, using the test URL from the Webhook node.
6. You will receive the output in the webhook response.

## API docs

Please create an account to access the API docs: https://ai.nero.com/ai-api/docs.

## Use cases

- **Large-scale printing**: Upscale images into ultra-sharp, billboard-ready masterpieces with 300+ DPI and billions of pixels.
- **Game asset compression**: Improve your game performance with AI image compression: faster, better, and lossless.
- **E-commerce image editing**: Remove and replace your product image backgrounds, create virtual showrooms.
- **Photo retouching**: Remove and reduce grain and noise from images.
- **Face animation**: Transform static images into dynamic facial-expression videos or GIFs with the Face Animation API.
- **Photo restoration**: The AI-driven Photo Restoration API offers advanced scratch removal, face enhancement, and image upscaling.
- **Colorize photo**: Transform black & white images into vivid colors.
- **Avatar generator**: Turn your selfie into custom avatars with different styles and backgrounds.
- **Website compression**: Speed up your website by compressing your images in bulk.
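A minimal way to exercise the webhook from a Node.js script rather than Postman. Both the URL and the payload here are placeholders: copy the real test URL from your Webhook node, and shape the body to whatever your "Create task" node reads from it:

```javascript
// Hypothetical test call — replace the URL with your Webhook node's test URL
// and the body fields with whatever your "Create task" node expects.
fetch('https://your-n8n-host/webhook-test/nero-task', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ image_url: 'https://example.com/input.jpg' }),
})
  .then(res => res.json())
  .then(result => console.log(result));
```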
by Alex Kim
# Automate Video Creation with Luma AI Dream Machine and Airtable (Part 1)

## Description

This workflow automates video creation using Luma AI Dream Machine and n8n. It generates dynamic videos based on custom prompts, random camera motion, and predefined settings, then stores the video and thumbnail URLs in Airtable for easy access and tracking. This automation makes it easy to create high-quality videos at scale with minimal effort.

Resources:

- Airtable Base Template
- Tutorial Video

## Setup

### 1. Luma AI setup

- Create an account with Luma AI.
- Generate an API key from Luma AI for authentication.
- Ensure the API key has permission to create and manage video requests.

### 2. Airtable setup

Create an Airtable base with the following fields:

- **Generation ID**: To match incoming webhook data.
- **Status**: Workflow status (e.g., "Done").
- **Video URL**: Stores the generated video URL.
- **Thumbnail URL**: Stores the thumbnail URL.
- **Prompt**: The video prompt used in the request.
- **Aspect Ratio**: Defines the video format (e.g., 9:16).
- **Duration**: Length of the video.

Use the Airtable template linked above to simplify setup.

### 3. n8n setup

- Install n8n (local or cloud).
- Set up Luma AI and Airtable credentials in n8n.
- Import the workflow and customize the settings based on your needs.

## How it works

### 1. Global settings configuration

The Set node defines key settings such as:

- **Prompt**: e.g., "A crocheted parrot in a crocheted pirate outfit swinging on a crocheted perch."
- **Aspect Ratio**: e.g., "9:16"
- **Loop**: e.g., "true"
- **Duration**: e.g., "5 seconds"
- **Cluster ID**: Used to group related videos for easy tracking.
- **Callback URL**: Used for the webhook workflow in Part 2.

### 2. Random camera motion

The Code node randomly selects a camera motion (e.g., Zoom In, Pan Left, Crane Up) to create dynamic and visually engaging videos.

### 3. API request to Luma AI

The HTTP Request node sends a POST request to Luma AI's API with the following parameters:

- **Prompt**: Uses the defined global settings.
- **Aspect Ratio**: Matches the target platform (e.g., TikTok or YouTube).
- **Duration**: Length of the video.
- **Loop**: Determines whether the video should loop.
- **Callback URL**: Receives a POST response when the video is complete.

### 4. Capture API response

Luma AI sends a POST response to the callback URL once video generation is complete. The response includes:

- **Video URL**: Direct link to the video.
- **Thumbnail URL**: Link to the video thumbnail.
- **Generation ID**: Used to match the record in Airtable.

### 5. Store in Airtable

The Airtable node updates the record with the video and thumbnail URLs. The **Generation ID** is crucial for matching future webhook responses to the correct video record.

## Why this workflow is useful

- Automates high-quality video creation
- Reduces manual effort by handling prompt generation and API calls
- Random camera motion makes videos more dynamic
- Ensures organized tracking with Airtable
- Scalable: ideal for automating large-scale content creation

## Next steps

- **Part 2**: Handling webhook responses and updating Airtable automatically.
- **Future enhancements**: Adding more camera motions, multi-platform support, and automated video editing.
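A sketch of what the random camera motion Code node might contain. The motion list and the `cameraMotion` field name are illustrative; the template's actual node may differ:

```javascript
// n8n Code node: choose a random camera motion for each incoming item.
const motions = [
  'Zoom In', 'Zoom Out',
  'Pan Left', 'Pan Right',
  'Crane Up', 'Crane Down',
];

for (const item of $input.all()) {
  item.json.cameraMotion = motions[Math.floor(Math.random() * motions.length)];
}
return $input.all();
```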
by Mauricio Perera
# n8n Workflow: Calculate the Centroid of a Set of Vectors

## Overview

This workflow receives an array of vectors in JSON format, validates that all vectors have the same dimensions, and computes the centroid. It is designed to be reusable across different projects.

## Workflow structure

### Nodes and their functions

1. **Receive Vectors** (Webhook): Accepts a GET request containing an array of vectors in the `vectors` parameter.
   - Expected input: `vectors` parameter in JSON format.
   - Example request: `/webhook/centroid?vectors=[[2,3,4],[4,5,6],[6,7,8]]`
   - Output: passes the received data to the next node.
2. **Extract & Parse Vectors** (Set node): Converts the input string into a proper JSON array for processing and ensures `vectors` is a valid array. If the parameter is missing, it may generate an error.
   - Expected output example: `{ "vectors": [[2,3,4],[4,5,6],[6,7,8]] }`
3. **Validate & Compute Centroid** (Code node): Ensures all vectors have the same number of dimensions, then averages each dimension to determine the centroid. If validation fails, it returns an error message indicating inconsistent dimensions.
   - Successful output example: `{ "centroid": [4,5,6] }`
   - Error output example: `{ "error": "Vectors have inconsistent dimensions." }`
4. **Return Centroid Response** (Respond to Webhook node): Sends the final response back to the client: the centroid on success, or a descriptive error message otherwise.
   - Example response: `{ "centroid": [4, 5, 6] }`

## Inputs

A JSON array of vectors, where each vector is an array of numerical values.

Example input:

```json
{
  "vectors": [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
  ]
}
```

## Setup guide

1. Create a new workflow in n8n.
2. Add a Webhook node (**Receive Vectors**) to receive JSON input.
3. Add a Set node (**Extract & Parse Vectors**) to extract and convert the data.
4. Add a Code node (**Validate & Compute Centroid**) to validate dimensions and compute the centroid.
5. Add a Respond to Webhook node (**Return Centroid Response**) to return the result.

## Function node script example

```javascript
const input = items[0].json;
const vectors = input.vectors;

if (!Array.isArray(vectors) || vectors.length === 0) {
  return [{ json: { error: "Invalid input: Expected an array of vectors." } }];
}

const dimension = vectors[0].length;
if (!vectors.every(v => v.length === dimension)) {
  return [{ json: { error: "Vectors have inconsistent dimensions." } }];
}

const centroid = new Array(dimension).fill(0);
vectors.forEach(vector => {
  vector.forEach((val, index) => {
    centroid[index] += val;
  });
});

for (let i = 0; i < dimension; i++) {
  centroid[i] /= vectors.length;
}

return [{ json: { centroid } }];
```

## Testing

Use a tool like Postman or the n8n UI to send sample inputs and verify the responses. Modify the input vectors to test different scenarios.

This workflow provides a simple yet flexible solution for vector centroid computation, ensuring validation and reliability.
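For a quick scripted test, the example input above can be sent straight to the webhook. This is a sketch; the host and webhook path are placeholders for your deployment:

```javascript
// Send the example vectors to the webhook and print the response.
const vectors = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
const url = 'https://your-n8n-host/webhook/centroid?vectors='
  + encodeURIComponent(JSON.stringify(vectors));

fetch(url)
  .then(res => res.json())
  .then(data => console.log(data)); // expected: { "centroid": [4, 5, 6] }
```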
by Angel Menendez
## Who is this for?

This workflow is designed for teams using Slack for communication and ServiceNow for incident management. It simplifies incident lookup by enabling team members to fetch incident details directly within Slack via a Slash Command.

## What problem is this workflow solving?

Manually switching between Slack and ServiceNow to retrieve incident details is time-consuming and disrupts workflow efficiency. This workflow bridges the two platforms, providing instant access to critical incident information in Slack, saving time and improving response efficiency.

## What this workflow does

The workflow listens for a Slash Command in Slack that includes an incident ID, extracts the ID from the incoming payload, queries ServiceNow for the corresponding incident details, and sends a formatted response back to Slack. Depending on the query result, it can:

- Display incident details (e.g., ID, description, severity, and priority).
- Notify the user if no matching incident is found.
- Alert the user if there's an issue connecting to ServiceNow.

## Setup

1. **Slack setup**: Create a Slash Command in Slack with the appropriate endpoint URL, and configure the command to send a POST request to this workflow's webhook endpoint. For details on how to set up the Slack app with Slash Commands and n8n, check out this video.
2. **ServiceNow setup**: Create or use an existing account with the necessary permissions to access incident data, and configure the ServiceNow node with your ServiceNow credentials.
3. **n8n workflow activation**: Deploy and activate the workflow in your n8n instance, ensuring all nodes are properly configured and connected.

## How to customize this workflow to your needs

- **Modify incident query parameters**: Adjust the query logic in the "Search For Incident in ServiceNow" node to include additional filters or data points based on your organization's needs.
- **Slack response customization**: Customize the Slack response template to display additional incident details or to match your team's tone and style.
- **Error handling**: Enhance the error-handling nodes to include more detailed logs or send alerts to a dedicated Slack channel.
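To illustrate the extraction step: Slack delivers a Slash Command's arguments in the `text` field of the POST body, so a Code node could pull the incident number out like this. This is a sketch; the `INC`-prefix pattern is an assumption based on ServiceNow's default incident numbering:

```javascript
// e.g. "/lookup INC0010042" arrives with body.text === "INC0010042".
const payload = $input.first().json;
const text = (payload.body?.text || '').trim();
const match = text.match(/INC\d+/i);

return [{
  json: {
    incidentId: match ? match[0].toUpperCase() : null,
    responseUrl: payload.body?.response_url, // for delayed Slack replies
  },
}];
```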
by Yaron Been
Automated pipeline that extracts job listings from Upwork and exports them to Google Sheets for better organization, analysis, and team collaboration.

## What it does

- Fetches job postings based on saved searches
- Extracts key job details (title, budget, description)
- Organizes data in Google Sheets
- Updates in real time
- Supports multiple search criteria

## Perfect for

- Freelancers tracking opportunities
- Teams managing multiple projects
- Agencies monitoring client needs
- Market researchers
- Business analysts

## Key benefits

- Centralized job board
- Easy sharing with team members
- Advanced filtering and sorting
- Historical data tracking
- Customizable data points

## What you need

- Upwork account
- Google account
- n8n instance
- Google Sheets setup

## Data exported

- Job title and description
- Budget and hourly rate
- Client information
- Posted date
- Required skills
- Job URL

## Setup & support

Get started in 15 minutes with our step-by-step guide: watch the tutorial, or reach out for expert support and direct help.

Streamline your job search and opportunity tracking with automated data collection and organization.
by Halfbit
# Jura Coffee Counter: Webhook API & Google Sheets Logger

Track how many coffees your Jura E8 espresso machine makes, fully automated via webhook and Google Sheets.

This workflow exposes a custom API endpoint that can be called by smart devices, such as an ESP8266 or ESP32 reading data from a Jura E8 coffee machine via Bluetooth Low Energy (BLE). The incoming data (including the total coffee count) is timestamped and appended to a Google Sheet, making it easy to visualize or analyze your machine usage. Originally built for a Jura E8, based on the AlexxIT/Jura reverse-engineering project.

> This workflow uses Google Sheets as a logging backend. You can easily switch it to Airtable, Notion, or a database of your choice.

> A live example is available at https://halfbitstudio.com/o-nas/. In our setup, this workflow provides real-time coffee consumption stats displayed directly on our website.

> Some Jura machines require an accessory Bluetooth transmitter to enable connectivity. Communication is based on the Bluetooth Low Energy (BLE) protocol.

## Use cases

- Tracking usage of a Jura coffee machine
- Logging IoT sensor data into Google Sheets
- Creating dashboards for daily consumption
- Smart office setups with coffee stats!

## Features

- Two webhook endpoints:
  - `POST /{{WEBHOOK_POST_PATH}}`: receives JSON from the ESP (coffee machine reader)
  - `GET /{{WEBHOOK_GET_PATH}}`: returns the latest records as JSON
- Timestamping via a Date & Time node
- Coffee counter extraction from the incoming JSON
- Appends structured rows to Google Sheets
- Webhook response for external status displays or dashboards

## Setup instructions

### Jura coffee machine integration (hardware)

Use an ESP device (e.g., ESP8266 or ESP32) to connect to the Jura E8 via Bluetooth Low Energy (BLE), and send POST requests with a JSON payload such as `{ "total_coffees": 123 }`. Reverse-engineered protocol reference: AlexxIT/Jura

### Google Sheets configuration

1. Create a new Google Sheet with column headers like: `date | time | coffee counter`
2. Connect your Google account in n8n and authorize access to this sheet.
3. Replace the `documentId` and `sheetName` fields in the Google Sheets nodes:
   - Use the full URL to your spreadsheet
   - Use the actual sheet name (e.g., `Sheet1`)

### Environment variables & placeholders

| Placeholder | Description |
| --- | --- |
| `{{WEBHOOK_POST_PATH}}` | Endpoint to receive coffee counter data |
| `{{WEBHOOK_GET_PATH}}` | Endpoint to return the latest data (for dashboards) |
| `{{SHEET_ID}}` | Google Spreadsheet ID |
| `{{GOOGLE_CREDENTIALS}}` | OAuth2 credentials for Google Sheets |
| `{{DATA_COLUMNS}}` | Column names in the target sheet |

## Testing the workflow

1. **Send a test request**: Use Postman or the ESP to send a POST request to `/{{WEBHOOK_POST_PATH}}` with a body that includes a `total_coffees` value.
2. **Check the Google Sheet**: Open your sheet and verify that a new row was appended.
3. **Test the GET endpoint**: Access the second webhook URL (e.g., `/{{WEBHOOK_GET_PATH}}`) in a browser or fetch it via API.
4. **Optional**: Use the Respond to Webhook output in a dashboard or frontend.

## Customization tips

- **Sheet format**: Add more columns if you want to track additional data (e.g., machine temperature, errors).
- **Output format**: Replace Google Sheets with any other storage (e.g., MySQL, Notion).
- **Auth layer**: Add basic auth or token verification if the endpoints are publicly exposed.
- **Notifications**: Send alerts to Discord/Slack when reaching thresholds (e.g., 200 coffees brewed).

Tags: google-sheets, iot, webhook, jura, coffee, api, automation
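For illustration, the timestamping and counter extraction could be collapsed into a single Code node like this. This is a sketch: the column names match the sheet headers above, and the webhook body is assumed to be the JSON payload shown earlier:

```javascript
// Build one sheet row per incoming webhook call.
const payload = $input.first().json;
const now = new Date();

return [{
  json: {
    date: now.toISOString().slice(0, 10),   // e.g. "2024-05-01"
    time: now.toISOString().slice(11, 19),  // e.g. "08:30:12"
    'coffee counter': payload.body?.total_coffees ?? null,
  },
}];
```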
by Dariusz Koryto
# FTP to Google Drive Transfer Template

## What this template does

This workflow automatically transfers files from an FTP server to Google Drive. It's perfect for:

- Backing up files from remote servers
- Migrating data from FTP to cloud storage
- Automating file synchronization tasks
- Creating scheduled backups of server content

## How it works

The workflow follows these steps:

1. **Manual Trigger**: You start the process by clicking "Execute".
2. **Lists FTP Directory**: Scans the specified FTP folder for all items.
3. **Filters Files Only**: Separates actual files from directories (folders).
4. **Downloads Files**: Retrieves each file as binary data from the FTP server.
5. **Uploads to Google Drive**: Stores all downloaded files in your specified Google Drive folder.

## Requirements

Before using this template, you'll need:

- **FTP server access**: server address, username, and password
- **Google Drive account**: with OAuth2 authentication set up in n8n
- **n8n instance**: self-hosted or cloud version

## Setup instructions

### Step 1: Configure FTP credentials

1. In n8n, go to Settings → Credentials.
2. Create a new FTP credential.
3. Enter your FTP server details:
   - Host: your FTP server address
   - Port: usually 21 for FTP
   - Username: your FTP username
   - Password: your FTP password
4. Test the connection and save.

### Step 2: Set up Google Drive authentication

1. Create a new Google Drive OAuth2 credential.
2. Follow n8n's Google Drive setup guide:
   - Create a Google Cloud project
   - Enable the Google Drive API
   - Create OAuth2 credentials
   - Add your n8n callback URL
3. Authorize the connection in n8n.

### Step 3: Configure the workflow

1. **Update the FTP path**: Open the "List FTP Directory" node and change the path parameter from `/_instalki` to your desired FTP folder.
2. **Set the Google Drive folder**: Open the "Upload to Google Drive" node and replace the `folderId` with your target Google Drive folder ID. To find the folder ID, open the folder in Google Drive and copy the ID from the URL.
3. **Assign credentials**: Ensure both FTP nodes use your FTP credential, and assign your Google Drive credential to the upload node.

## How to use

1. **Test first**: Run the workflow manually with a few test files.
2. **Monitor execution**: Check the execution log for any errors.
3. **Verify the upload**: Confirm the files appear in your Google Drive folder.
4. **Schedule (optional)**: Add a Schedule Trigger if you want automatic runs.

## Customization options

- **Filter specific file types**: Add a condition after "Filter Files Only" to process only certain file extensions, e.g. `{{ $json.name.endsWith('.pdf') || $json.name.endsWith('.jpg') }}`
- **Add error handling**: Insert error-handling nodes to manage failed downloads or uploads gracefully.
- **Organize by date**: Modify the Google Drive upload to create date-based folders automatically (see the sketch after this list).
- **File size limits**: Add checks for file size before attempting an upload (Google Drive has limits).
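One way to sketch the date-based organization: a Code node that computes a per-day folder name, which a downstream "create folder if missing" Drive step could then use. The field name here is illustrative:

```javascript
// n8n Code node: tag each file item with the folder name for today's date.
for (const item of $input.all()) {
  const d = new Date();
  item.json.targetFolderName = d.toISOString().slice(0, 10); // e.g. "2024-05-01"
}
return $input.all();
```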
## Troubleshooting

Common issues:

- **FTP connection failed**: Check the server address, port, and credentials.
- **Google Drive upload error**: Verify the OAuth2 setup and folder permissions.
- **Files not found**: Ensure the FTP path exists and contains files.
- **Large files**: Consider Google Drive's storage limit (15 GB for free accounts).

Tips:

- Test with small files first.
- Check the n8n execution logs for detailed error messages.
- Ensure your Google Drive has sufficient storage space.
- Verify the FTP server allows multiple concurrent connections.

## Security notes

- Never hardcode credentials in the workflow; use n8n's credential system for all authentication.
- Consider using SFTP instead of FTP for better security.
- Regularly rotate your FTP passwords.
- Review Google Drive sharing permissions.

## Next steps

Once you have this basic transfer working, you might want to:

- Add email notifications for successful/failed transfers
- Implement file deduplication checks
- Create logs of transferred files
- Set up automatic cleanup of old files
- Add file compression before upload
by Rizky Febriyan
## How it works

This workflow automates the analysis of security alerts from Sophos Central, turning raw events into actionable intelligence. It uses the official Sophos SIEM integration tool to fetch data, enriches it with VirusTotal, and leverages Google Gemini to provide a real-time threat summary and mitigation plan via Telegram.

**Prerequisite (important)**: This workflow is triggered by a webhook that receives data from an external Python script. You must first set up the Sophos-Central-SIEM-Integration script from the official Sophos GitHub. This script fetches data and forwards it to your n8n webhook URL. Tool source code: Sophos/Sophos-Central-SIEM-Integration

## The n8n workflow steps

1. **Webhook**: Receives enriched event and alert data from the external Python script.
2. **IF (Filter)**: Immediately filters the incoming data so that only events with high or critical severity are processed, reducing noise from low-priority alerts.
3. **Code (Prepare Indicator)**: Inspects the Sophos event data to extract the primary threat indicator, prioritizing indicators in the following order: file hash (SHA256), then URL/domain, then source IP.
4. **HTTP Request (VirusTotal)**: Sends the extracted indicator to the VirusTotal API to get a detailed reputation report, including how many security vendors flagged it as malicious.
5. **Code (Prompt for Gemini)**: Processes the raw JSON output from VirusTotal into a clean, human-readable summary and a detailed list of flagging vendors.
6. **AI Agent (Google Gemini)**: Compiles all collected data (the original Sophos log, the full alert details, and the formatted VirusTotal reputation) into a detailed prompt for Gemini. The AI acts as a virtual SOC analyst to:
   - Create a concise incident summary.
   - Determine the risk level.
   - Provide a list of concrete, actionable mitigation steps.
7. **Telegram**: Formats the complete analysis and mitigation plan from Gemini into a clean, easy-to-read message and sends it to your specified Telegram chat.

## Setup instructions

1. Configure the external Python script to forward events to this workflow's production URL.
2. In n8n, create credentials for Google Gemini, VirusTotal, and Telegram.
3. Assign the newly created credentials to the corresponding nodes in the workflow.
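A sketch of the indicator-priority logic in the "Prepare Indicator" Code node. The field names (`sha256`, `url`, `domain`, `source_ip`) are assumptions; adjust them to the actual Sophos event schema your script forwards:

```javascript
// Pick the highest-priority indicator: SHA256 hash, then URL/domain, then IP.
const e = $input.first().json;

let indicator = null;
let indicatorType = null;

if (e.sha256) {
  indicator = e.sha256;
  indicatorType = 'file_hash';
} else if (e.url || e.domain) {
  indicator = e.url || e.domain;
  indicatorType = 'url';
} else if (e.source_ip) {
  indicator = e.source_ip;
  indicatorType = 'ip';
}

return [{ json: { indicator, indicatorType, event: e } }];
```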
by Derek Cheung
## Purpose of workflow

The purpose of this workflow is to automate scraping a website, transforming the result into a structured format, and loading it directly into a Google Sheets spreadsheet.

## How it works

1. **Web scraping**: Uses the Jina AI service to scrape website data and convert it into LLM-friendly text.
2. **Information extraction**: Employs an AI node to extract specific book details (title, price, availability, image URL, product URL) from the scraped data.
3. **Data splitting**: Splits the extracted information into individual book entries.
4. **Google Sheets integration**: Automatically populates a Google Sheets spreadsheet with the structured book data.

## Step by step setup

1. **Set up the Jina AI service**: Sign up for a Jina AI account and obtain an API key.
2. **Configure the HTTP Request node**: Enter the Jina AI URL with the target website, and add the API key to the request headers for authentication.
3. **Set up the Information Extractor node**: Use Claude AI to generate a JSON schema for data extraction: upload a screenshot of the target website to Claude AI, ask it to suggest a JSON schema for extracting the required information, and copy the generated schema into the Information Extractor node.
4. **Configure the Split node**: Set it up to separate the extracted data into individual book entries.
5. **Set up the Google Sheets node**: Create a Google Sheets spreadsheet with columns for title, price, availability, image URL, and product URL, then configure the node to map the extracted data to the appropriate columns.
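For orientation, a Claude-generated schema for the book fields listed above might look roughly like this. This is an illustrative sketch, not the exact output you will get; adapt the property names to your target site:

```json
{
  "type": "object",
  "properties": {
    "books": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "title": { "type": "string" },
          "price": { "type": "string" },
          "availability": { "type": "string" },
          "image_url": { "type": "string" },
          "product_url": { "type": "string" }
        },
        "required": ["title", "price"]
      }
    }
  }
}
```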
by Yang
## Who is this for?

This workflow is for social media agencies, influencer marketers, and brand managers who need to automatically qualify TikTok creators based on their follower metrics. It's especially useful for teams managing influencer outreach campaigns or building talent databases.

## What problem is this workflow solving?

Manually tracking TikTok user stats is time-consuming and inconsistent. This automation instantly pulls TikTok profile data and only saves creators who meet a defined follower threshold. It removes manual vetting, reduces spreadsheet work, and makes influencer qualification scalable.

## What this workflow does

This workflow uses Airtable as the trigger, Dumpling AI to scrape TikTok profile information, and a logic condition to check whether the profile has more than 100k followers. Qualified profiles are updated with full metrics and stored back in Airtable.

## Setup

1. **Airtable setup**
   - Create a table with a field named `Tik tok username`.
   - Connect your Airtable account to n8n using a Personal Access Token.
   - Set up a trigger to run when a new TikTok username is added.
2. **Dumpling AI**
   - Sign up at Dumpling AI.
   - Create a Dumpling AI credential in n8n using your API key.
   - The HTTP node sends the TikTok handle to Dumpling's `/get-tiktok-profile` endpoint.
3. **Configure the filter**
   - The IF node checks whether `followerCount` is greater than or equal to 100,000.
4. **Airtable update**
   - If qualified, the record is updated with: `ID` (TikTok ID), `followerCount`, `followingCount`, `heartCount`, and `videoCount`.

## How to customize this workflow to your needs

- Change the follower count threshold to fit your campaign (e.g., 10K, 500K, 1M).
- Add fields like engagement rate, niche tags, or scraped bio.
- Chain additional steps, such as sending approved creators to your CRM or triggering outreach messages.
- Add another filter to exclude private or inactive accounts.
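For orientation, the HTTP Request node's JSON body might look like the following. The exact field name Dumpling AI expects is an assumption here; check the endpoint's documentation before relying on it:

```json
{
  "handle": "{{ $json['Tik tok username'] }}"
}
```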