by Cameron Wills
**Who is this for?**
Content creators, social media managers, digital marketers, and researchers who need to download original TikTok videos without watermarks for analysis, repurposing, or archiving.

**What problem does this workflow solve?**
Downloading TikTok videos without watermarks typically means relying on questionable third-party websites that come with limitations, ads, or privacy concerns. This workflow provides a clean, automated alternative that you can integrate into your own systems and processes.

**What this workflow does**
This workflow automates the process of downloading TikTok videos without watermarks in three simple steps, plus an optional fourth:
1. Fetch the TikTok video page by providing the video URL.
2. Extract the raw video URL from the page's HTML data.
3. Download the original video file without a watermark.
4. (Optional) Upload to Google Drive and generate a public sharing link.

The workflow uses web-scraping techniques to extract the original video source directly from TikTok's own servers (see the sketch after this section), maintaining the highest possible quality without any added watermarks or branding.

**Setup (est. time: 5-10 minutes)**
Before getting started, you'll need:
- An n8n installation
- The URL of a TikTok video you want to download
- (Optional) Google Drive API enabled in Google Cloud Console, with OAuth Client ID and Client Secret credentials, if you want to use the upload feature

**How to customize this workflow to your needs**
- Replace the example TikTok URL with your desired video links.
- Modify the file-naming convention for downloaded videos.
- Integrate other nodes to process videos after downloading.
- Create a webhook to trigger the workflow from external applications.
- Set up a schedule to regularly download videos from specific accounts.

This workflow can be extended to support use cases like trending-content analysis, competitor research, compilation videos, or building a content library for inspiration. It provides a foundation that can be customized to fit into larger automated workflows for content creation and social media management.
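For orientation, here is a hypothetical sketch of the "extract raw video URL" step, written as it might appear in an n8n Code node. It assumes the raw source appears as an escaped .mp4 link inside the page's embedded JSON; TikTok's markup changes frequently, so treat the pattern as illustrative rather than the template's exact expression.

```javascript
// Hypothetical sketch of the "extract raw video URL" step (e.g., in an n8n
// Code node). It scans the fetched page HTML for an escaped .mp4 URL inside
// TikTok's embedded JSON; the pattern is illustrative and may need updating.
function extractVideoUrl(html) {
  // Match an https URL ending in .mp4, allowing JSON-escaped slashes (\/).
  const match = html.match(/https:(?:\\\/|\/)[^"']*?\.mp4[^"'\s]*/);
  if (!match) return null;
  // Unescape JSON-style slashes so the URL is directly downloadable.
  return match[0].replace(/\\\//g, '/');
}
```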
by Guillaume Duvernay
**Description**
This template provides a simple, powerful backend for adding speech-to-text capabilities to any application. It creates a dedicated webhook that receives an audio file, transcribes it using OpenAI's gpt-4o-mini transcription model, and returns the clean text. To help you get started immediately, you'll find a complete, ready-to-use HTML code example right inside the workflow in a sticky note. This code creates a functional recording interface you can use for testing or as a foundation for your own design.

**Who is this for?**
- **Developers:** Quickly add a transcription feature to your application by calling this webhook from your existing frontend or backend code.
- **No-code/low-code builders:** Embed a functional audio recorder and transcription service into your projects by using the example code found inside the workflow.
- **API enthusiasts:** A lean, practical example of how to use n8n to wrap a service like OpenAI into your own secure and scalable API endpoint.

**What problem does this solve?**
- **Provides a ready-made API:** Instantly gives you a secure webhook that handles audio file uploads and transcription without any server setup.
- **Decouples frontend from backend:** Your application only needs to know one simple webhook URL, so you can change the backend logic in n8n without touching your app's code.
- **Offers a clear implementation pattern:** The included example code demonstrates how to send an audio file from a browser and handle the response, a pattern you can replicate in any framework.

**How it works**
This solution defines a clear API contract between your application (the client) and the n8n workflow (the backend).

The client-side technique (shown in the sketch after this section):
1. Your application's interface records or selects an audio file.
2. It makes a POST request to the n8n webhook URL, sending the audio file as multipart/form-data.
3. It waits for the response from the webhook, parses the JSON body, and extracts the value of the Transcript key.
You can see this exact pattern in action in the example code provided in the workflow's sticky note.

The n8n workflow (backend):
1. The Webhook node catches the incoming POST request and grabs the audio file.
2. The HTTP Request node sends this file to the OpenAI API.
3. The Set node isolates the transcript text from the API's response.
4. The Respond to Webhook node sends a clean JSON object ({"Transcript": "your text here..."}) back to your application.

**Setup**
Configure the n8n workflow:
1. In the Transcribe with OpenAI node, add your OpenAI API credentials.
2. Activate the workflow to enable the endpoint.
3. Click the "Copy" button on the Webhook node to get your unique Production Webhook URL.

Integrate with the frontend:
1. Inside the workflow, find the sticky note labeled "Example Frontend Code Below" and copy the complete HTML from the note below it.
2. ⚠️ Important: In the code you just copied, find the line const WEBHOOK_URL = 'YOUR WEBHOOK URL'; and replace the placeholder with the Production Webhook URL from n8n.
3. Save the code as an HTML file and open it in your browser to test.

**Taking it further**
- **Save transcripts:** Add an **Airtable** or **Google Sheets** node to log every transcript that comes through the workflow.
- **Error handling:** Enhance the workflow to catch potential errors from the OpenAI API and respond with a clear error message.
- **Analyze the transcript:** Add a **Language Model** node after the transcription step to summarize the text, classify its sentiment, or extract key entities before sending the response.
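The client-side pattern described above boils down to a few lines. A minimal sketch, assuming the webhook accepts the file under a generic form field name (the exact field name in the template's example code may differ):

```javascript
// Minimal client-side sketch: POST a recorded audio Blob to the n8n webhook
// as multipart/form-data and read the Transcript key from the JSON response.
const WEBHOOK_URL = 'YOUR WEBHOOK URL'; // replace with your Production Webhook URL

async function transcribe(audioBlob) {
  const form = new FormData();
  // The form field name is an assumption; match it to your Webhook node setup.
  form.append('audio', audioBlob, 'recording.webm');

  const response = await fetch(WEBHOOK_URL, { method: 'POST', body: form });
  if (!response.ok) throw new Error(`Webhook returned ${response.status}`);

  const data = await response.json();
  return data.Transcript; // the workflow responds with {"Transcript": "..."}
}
```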
by Airtop
**Automating LinkedIn Company URL Verification**

**Use case**
This automation verifies that a given LinkedIn URL actually belongs to a company by comparing the website listed on its LinkedIn page against the expected company domain. It is essential for ensuring data accuracy in lead qualification, enrichment, and CRM updates.

**What this automation does**
Input parameters:
- **Company LinkedIn:** The LinkedIn URL to be verified.
- **Company Domain:** The expected domain (e.g., example.com) for validation.
- **Airtop Profile (connected to LinkedIn):** An Airtop Profile with LinkedIn authentication.

Output:
- Confirmation of whether the LinkedIn page corresponds to the provided domain.
- The verified LinkedIn URL if the match is confirmed.

**How it works**
1. Extracts the website URL from the specified LinkedIn company profile.
2. Compares the extracted URL with the provided company domain (see the sketch after this section).
3. If the domain is contained in the extracted website, the LinkedIn profile is confirmed as valid.
4. Returns the original LinkedIn URL if the match is successful.

**Setup requirements**
- Airtop API key
- LinkedIn-authenticated Airtop Profile

**Next steps**
- **Use for LinkedIn discovery validation:** Ensure correctness after automated LinkedIn page discovery.
- **Combine with CRM updates:** Prevent incorrect LinkedIn links from being stored in your CRM.
- **Automate in data pipelines:** Use this as a validation gate before enrichment or scoring steps.
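A minimal sketch of the comparison logic in steps 2-3, assuming the website extracted from LinkedIn may or may not include a scheme or www prefix:

```javascript
// Sketch of the domain check: normalize the website pulled from the LinkedIn
// page, then test whether it contains the expected company domain.
function linkedInMatchesDomain(extractedWebsite, companyDomain) {
  // Ensure the URL parses even when LinkedIn lists it without a scheme.
  const withScheme = /^https?:\/\//i.test(extractedWebsite)
    ? extractedWebsite
    : `https://${extractedWebsite}`;
  const hostname = new URL(withScheme).hostname.replace(/^www\./i, '');
  return hostname.toLowerCase().includes(companyDomain.toLowerCase());
}

// Example: linkedInMatchesDomain('www.example.com/about', 'example.com') -> true
```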
by Giacomo Lanzi
Extract the title tag and meta description from URLs for SEO analysis.

**How it works**
The workflow takes records from Airtable, reads the URL in each record, and extracts the title tag (<title>) and the meta description (<meta name="description" content="Some content">) from the corresponding webpage. If the title tag and/or meta description isn't available on the webpage, the result will be empty. (A sketch of the extraction logic follows this section.)

**Setup**
Set up a base in Airtable with a table with the following structure:
- url (field type: URL), e.g. https://example.com
- title tag (field type: text), e.g. "Title example"
- meta desc (field type: text), e.g. "This is the meta description of the example page"

Connect Airtable to both Airtable nodes in the template and use the following formula to get all the records that are missing the title tag and meta description:

AND(url != "", {title tag} = "", {meta desc} = "")

Insert the URLs to be analyzed in the url field and let the workflow do the rest.

**Extra**
You can also calculate the length of the title tag and meta description using formula fields inside Airtable: LEN({title tag}) or LEN({meta desc}).

You can automate the process by calling a webhook from Airtable. For this, you need an Airtable paid plan.
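A minimal sketch of the extraction step. It assumes the meta tag lists name before content; real pages may order attributes differently, so the template's own expression may be more forgiving:

```javascript
// Sketch of the extraction logic: pull the <title> text and the meta
// description out of raw HTML, returning empty strings when a tag is absent.
function extractSeoTags(html) {
  const titleMatch = html.match(/<title[^>]*>([\s\S]*?)<\/title>/i);
  // Assumes name="description" appears before content="..."; adjust if needed.
  const descMatch = html.match(
    /<meta[^>]*name=["']description["'][^>]*content=["']([^"']*)["']/i
  );
  return {
    titleTag: titleMatch ? titleMatch[1].trim() : '',
    metaDesc: descMatch ? descMatch[1].trim() : '',
  };
}
```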
by Viktor Klepikovskyi
**Google Sheets UI for Workflow Control**

This n8n template provides a practical, efficient way to manage your n8n workflows using Google Sheets as a user-friendly interface. It demonstrates how to leverage a simple spreadsheet to control inputs, capture outputs, and track the processing status of individual data rows, offering a clear and visual overview of your automation tasks.

**Purpose of this template**
The primary purpose of this template is to illustrate how Google Sheets can serve as a dynamic UI for your n8n automations. It's designed for n8n users who need:
- A structured method to feed specific data into their workflows.
- The ability to selectively trigger workflow execution based on data status.
- A centralized place to view and store workflow outputs alongside original inputs.
- A simple, no-code solution for managing workflow data without building custom applications.

**Setup instructions**
1. Create a Google Sheet: Set up a new Google Sheet (see the template here) with three columns: Color, Status, and Number. Populate the Color column with some sample data (e.g., color names) and set the Status of the rows you want to process to READY.
2. Import the n8n workflow: Import this template into your n8n instance.
3. Configure the Google Sheets nodes: Connect the first Google Sheets node (the Read operation) to your newly created Google Sheet and configure it to read rows where the Status column is READY; you will need to authenticate your Google Sheets account. Connect the second Google Sheets node (the Update operation) to the same sheet; it should automatically map the row_number, Number, and Status fields from the preceding nodes.
4. Execute the workflow: Run the workflow and observe how it reads READY rows, processes them (calculating string length, as sketched below), and updates the Number and Status columns in your Google Sheet to DONE.
5. Control execution: To process new data, simply add new rows to your Google Sheet and set their Status to READY. Rerunning the workflow will then only process these new entries.

For more details and context on this approach, refer to the related blog post here.
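A minimal sketch of the processing step, written as an n8n Code node body. It assumes the Read node outputs one item per READY row with Color and row_number fields:

```javascript
// Sketch of the processing step (n8n Code node, "Run Once for All Items"):
// compute the string length of each Color value and mark the row DONE.
const rows = $input.all();

return rows.map(({ json }) => ({
  json: {
    row_number: json.row_number, // lets the Update node target the right row
    Number: String(json.Color ?? '').length,
    Status: 'DONE',
  },
}));
```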
by Ria
This workflow demonstrates how to use the workflowStaticData() function to set any type of variable that will persist across workflow executions. https://docs.n8n.io/code/cookbook/builtin/get-workflow-static-data/

This can be useful, for example, when working with access tokens that expire after a certain time period. Using static data, we can keep a record of that access token and its expiry time and build our workflow logic around it (see the sketch after this section).

**Important**
Static data only persists across production executions, i.e. executions triggered by webhooks or Schedule Triggers (not manual executions!). For this, the workflow has to be activated.

**Setup**
1. Configure the HTTP Request node to fetch an access token from your API (optional).
2. Activate the workflow.
3. Test the workflow with the webhook's production link.
4. You can check how the static data is populated in the individual executions.

**Feedback**
If you found this useful or want to report some missing information, I'd be happy to hear from you at ria@n8n.io.
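A minimal sketch of the token-caching pattern in a Code node, assuming an upstream HTTP Request node returns token and expires_in fields (the field names are illustrative):

```javascript
// Sketch of access-token caching with static data (n8n Code node).
// Remember: static data only persists in production (activated) executions.
const staticData = $getWorkflowStaticData('global');

if (!staticData.accessToken || Date.now() >= (staticData.tokenExpiresAt ?? 0)) {
  // Assumes the previous HTTP Request node returned { token, expires_in }.
  const { token, expires_in } = $input.first().json;
  staticData.accessToken = token;
  staticData.tokenExpiresAt = Date.now() + expires_in * 1000;
}

return [{ json: { accessToken: staticData.accessToken } }];
```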
by Mihai Farcas
This n8n workflow automates the process of saving web articles or links shared in a chat conversation directly into a Notion database, using Google's Gemini AI and Browserless for web scraping.

**Who is this AI automation template for?**
It's useful for anyone wanting to reduce manual copy-pasting and organize web findings seamlessly within Notion. A smarter web clipping tool!

**What this AI automation workflow does**
1. Starts when a message is received.
2. Uses a Google Gemini AI Agent node to understand the context and manage the subsequent steps, identifying whether a message contains a request to save an article/link.
3. If a URL is detected, it utilizes a tool configured with the Browserless API (via the HTTP Request node) to scrape the content of the web page.
4. Creates a new page in a specified Notion database, populating it with a summary of the scraped content in a specific format, never leaving out any important details. It also saves the original URL, smart tags, publication date, and other metadata extracted by the AI.
5. Posts a confirmation message (e.g., to a Discord channel) indicating whether the article was saved successfully or an error occurred.

**Setup**
1. Import workflow: Import this template into your n8n instance.
2. Configure credentials and the Notion database:
   - Notion database: Create or designate a Notion database (like the example "Knowledge Database") where articles will be saved. Ensure this database has the following properties (fields): Name (type: Text) for the article title; URL (type: URL) for the original article link; Description (type: Text) for the AI-generated summary; Tags (type: Multi-select), optional, for categorization; Publication Date (type: Date), optional, for the date the article was published. Ensure the n8n integration has access to this specific database. If you require a different format for the Notion database, note that you will have to update the Notion tool configuration in this n8n workflow accordingly.
   - Notion credential: Obtain your Notion API key and add it as a Notion credential in n8n. Select this credential in the save_to_notion tool node.
   - Configure the save_to_notion tool: In the save_to_notion tool node within the workflow, set the 'Database ID' field to the ID of the Notion database you prepared above, and map the workflow data (URL, AI summary, etc.) to the corresponding database properties (URL, Description, etc.). In the blocks section of the Notion tool, you can define a custom format for the research page, allowing the AI to fill in the exact details you want extracted from any web page.
   - Google Gemini AI: Obtain your API key from Google AI Studio or Google Cloud Console (if using Vertex AI) and add it as a credential. Select this credential in the "Tools Agent" node.
   - Discord (or another notification service): If using Discord notifications, create a webhook URL (instructions) or set up a bot token. Add the credential in n8n, select it in the discord_notification tool node, and configure the target channel ID.
   - Browserless/HTTP Request: Cloud: obtain your API key from Browserless and configure the website_scraper HTTP Request tool node with the correct API endpoint and authentication header. Self-hosted: ensure your Browserless Docker container is running and accessible by n8n, then configure the website_scraper node with your self-hosted Browserless instance URL.
3. Activate workflow: Save, test, and activate the workflow.
**How to customize this workflow to your needs**
- **Change the AI model:** Experiment with different AI models supported by n8n (like OpenAI GPT models or Anthropic Claude) in the Agent node if Gemini 2.5 Pro doesn't fit your needs or budget, keeping in mind potential differences in context window size and processing capabilities for large content.
- **Modify Notion saving:** Adjust the save_to_notion tool node to map different data fields (e.g., change the summary style by modifying the AI prompt, add specific tags, or alter the page content structure) to your Notion database properties.
- **Adjust scraping:** Modify the prompt/instructions for the website_scraper tool or change the parameters sent to the Browserless API (a sketch of such a request follows) if you need different data extracted from the web pages. You could also swap Browserless for another scraping service/API accessible via the HTTP Request node.
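For reference, a hedged sketch of roughly what the website_scraper HTTP Request tool sends to Browserless's cloud content endpoint; the endpoint and parameter names follow Browserless's public docs, but verify them against your plan and Browserless version:

```javascript
// Hedged sketch of a Browserless scrape call, approximating what the
// website_scraper HTTP Request tool performs. The endpoint and params may
// differ by Browserless version; the token is a placeholder.
async function scrapePage(url) {
  const res = await fetch(
    'https://chrome.browserless.io/content?token=YOUR_BROWSERLESS_API_KEY',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url }), // Browserless renders the page, returns HTML
    }
  );
  if (!res.ok) throw new Error(`Browserless returned ${res.status}`);
  return res.text();
}
```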
by Niklas Hatje
**Use case**
In most companies, employees have a lot of great ideas. That was the same for us at n8n. We wanted to make it as easy as possible for everyone to add their ideas to a formatted database; it should live somewhere everyone already spends their time, so a new idea can be added without much extra effort. Since we're using Slack, this seemed to be the perfect place to easily capture ideas and collect them in Notion.

**What this workflow does**
This workflow waits for a webhook call from Slack that gets fired when users use the /idea command on a bot you will create as part of this template. It then checks the command, adds the idea to Notion, and notifies the user about the newly added idea, as you can see below. (A sketch of the payload handling follows this section.)

**Creating your Slack bot**
1. Visit https://api.slack.com/apps, click on New App, and choose a name and workspace.
2. Click on OAuth & Permissions and scroll down to Scopes -> Bot Token Scopes.
3. Add the chat:write scope.
4. Head over to Slash Commands and click on Create New Command.
5. Use /idea as the command.
6. Copy the test URL from the Webhook node into Request URL.
7. Add whatever feels best to the description and usage hint.
8. Go to Install App and click install.

**Setup**
1. Add a database in Notion with the columns Name and Creator.
2. Add your Notion credentials and add the integration to your Notion page.
3. Fill in the setup node below.
4. Create your Slack app (see the other sticky note).
5. Click Test workflow and use the /idea command in Slack.
6. Activate the workflow and exchange the Request URL with the production URL from the webhook.

**How to adjust it to your needs**
- You can adjust the table in Notion and, for example, add different types of ideas or the areas they impact.
- You might want to add different templates in Notion to make it easier for users to fill in their ideas with details.
- Rename the Slack command to whatever works best for you.

**How to enhance this workflow**
At n8n we use this workflow in combination with some others. For example, we have the following on top:
- We additionally have a /bug Slack command that adds a new bug to Linear. Here we're using AI to classify the bugs and move them to the right team (see this template and this template).
- We also added other types, like /pain, to be less solution-driven.
- To make it easier for everyone to give input, we added a Votes column that allows everyone to vote on ideas/pain points in the list.
- We're also running a workflow once a week that highlights the most popular new ideas and the most active voters (see here).
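Slack delivers slash-command payloads as application/x-www-form-urlencoded fields (command, text, user_name, among others). A minimal sketch of the check-and-map step, assuming the Webhook node surfaces those fields under body:

```javascript
// Sketch of handling Slack's /idea payload in an n8n Code node. Slack sends
// slash commands as form-encoded fields; the Webhook node exposes them on body.
const { command, text, user_name } = $input.first().json.body;

if (command !== '/idea') {
  return []; // ignore anything that isn't the /idea command
}

// Map onto the Notion columns used by this template.
return [{ json: { Name: text, Creator: user_name } }];
```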
by Daniel Shashko
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automates the process of scraping product data from e-commerce websites and using it to fine-tune a custom OpenAI GPT model for generating high-quality marketing copy and product descriptions.

**Main use cases**
- Fine-tune OpenAI models with real product data from hundreds of supported e-commerce websites for marketing content generation.
- Create custom AI models specialized in writing compelling product descriptions across different industries and platforms.
- Automate the entire pipeline from data collection to model training using Bright Data's extensive scraper library.
- Generate marketing copy using your custom-trained model via an interactive chat interface.

**How it works**
The workflow operates in two main phases, model training and model usage, organized into these stages:

1. Data collection and processing
   - Manually triggered to start the fine-tuning process.
   - Uses Bright Data's web scraper to extract product information from any supported e-commerce platform (Amazon, eBay, Shopify stores, Walmart, Target, and hundreds of other websites).
   - Collects product titles, brands, features, descriptions, ratings, and availability status from your chosen platform.
   - Easily customizable to scrape different websites by simply changing the dataset configuration and product URLs.

2. Training data preparation
   - A Code node processes the scraped product data to create training examples in OpenAI's required JSONL format (see the sketch after this section).
   - For each product, it generates a complete training example with: a system message defining the AI's role as a marketing assistant; a user prompt containing specific product details (title, brand, features, original description snippet); and an assistant response providing an ideal marketing description template.
   - It compiles all training examples into a single JSONL file ready for OpenAI fine-tuning.

3. Model fine-tuning
   - Uploads the training file to OpenAI using the OpenAI File Upload node.
   - Initiates a fine-tuning job via an HTTP Request to OpenAI's fine-tuning API, using GPT-4o-mini as the base model.
   - The fine-tuning process runs on OpenAI's servers to create your custom model.

4. Interactive chat interface
   - Provides a chat trigger that allows real-time interaction with your fine-tuned model.
   - An AI Agent node connects to your custom-trained OpenAI model.
   - Users can chat with the model to generate product descriptions, marketing copy, or other content based on the training.

5. Custom model integration
   - The OpenAI Chat Model node is configured to use your specific fine-tuned model ID.
   - Delivers responses trained on your product data for consistent, high-quality marketing content.

**Summary flow**
Manual Trigger → Scrape E-commerce Products (Bright Data) → Process & Format Training Data (Code) → Upload Training File (OpenAI) → Start Fine-Tuning Job (HTTP Request) | In parallel: Chat Trigger → AI Agent → Custom Fine-Tuned Model Response

**Benefits**
- Fully automated pipeline from raw product data to a trained AI model.
- Works with hundreds of different e-commerce websites through Bright Data's extensive scraper library.
- Creates specialized models trained on real e-commerce data for authentic marketing copy across various industries.
- Scalable solution that can be adapted to different product categories, niches, or websites.
- Interactive chat interface for immediate access to your custom-trained model.
- Cost-effective fine-tuning using OpenAI's most efficient model (GPT-4o-mini).
- Easily customizable with different websites, product URLs, training prompts, and model configurations.

**Setup requirements**
- Bright Data API credentials for web scraping (supports hundreds of e-commerce websites).
- OpenAI API key with fine-tuning access.
- Replace placeholder credential IDs and model IDs with your actual values.
- Customize the product URL list and Bright Data dataset for your specific website and use case.

The workflow can be adapted to any e-commerce platform supported by Bright Data's scraping infrastructure.
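For orientation, a minimal sketch of the Code node that assembles the JSONL training file. The prompt wording and product field names (title, brand, features, description) are illustrative stand-ins for whatever your Bright Data dataset returns:

```javascript
// Sketch of building OpenAI's chat fine-tuning JSONL in an n8n Code node:
// one {"messages": [...]} object per product, serialized one per line.
const lines = $input.all().map(({ json: product }) =>
  JSON.stringify({
    messages: [
      {
        role: 'system',
        content: 'You are a marketing assistant that writes compelling product descriptions.',
      },
      {
        role: 'user',
        content: `Write a marketing description for "${product.title}" by ${product.brand}. Key features: ${product.features}.`,
      },
      // The ideal answer the model should learn to produce.
      { role: 'assistant', content: product.description },
    ],
  })
);

return [{ json: { jsonl: lines.join('\n') } }];
```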
by bangank36
This workflow restores all n8n instance workflows from GitHub backups using the n8n API node. It complements the Backup Your Workflows to GitHub template by allowing users to seamlessly restore previously saved workflows.

**How it works**
The workflow fetches the workflows stored in a GitHub repository and imports them into your n8n instance (a sketch of the underlying API call follows this section).

**Setup instructions**
To configure the workflow, update the Globals node with the following values:
- **repo.owner** – your GitHub username
- **repo.name** – the name of the GitHub repository storing the workflows
- **repo.path** – the folder path within the repository where workflows are stored

For example, if your GitHub username is john-doe, your repository is named n8n-backups, and workflows are stored in a workflows/ folder, you would set:
- repo.owner → john-doe
- repo.name → n8n-backups
- repo.path → workflows/

**Required credentials**
- **GitHub API** – access to your repository
- **n8n API** – to import workflows into your n8n instance

**Who is this for?**
This template is ideal for users who want to restore their workflows from GitHub backups, ensuring easy migration and recovery in case of data loss.

Check out my other templates: 👉 My n8n Templates
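Under the hood, the n8n API node performs the equivalent of a POST to the public API's workflows endpoint. A hedged sketch for orientation (N8N_URL and the API key are placeholders, and the API node handles this for you in the template):

```javascript
// Hedged sketch of restoring one backed-up workflow via n8n's public API,
// roughly what the n8n API node does. Placeholders: N8N_URL, N8N_API_KEY.
async function restoreWorkflow(workflowJson) {
  const res = await fetch(`${process.env.N8N_URL}/api/v1/workflows`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-N8N-API-KEY': process.env.N8N_API_KEY, // generated in n8n's settings
    },
    body: JSON.stringify(workflowJson), // the file content fetched from GitHub
  });
  if (!res.ok) throw new Error(`Import failed: ${res.status}`);
  return res.json();
}
```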
by Agent Studio
**Overview**
This workflow provides Retell agent builders with a simple way to populate dynamic variables using n8n. The workflow fetches user information from a Google Sheet based on the caller's phone number and sends it back to Retell. It is based on Retell's Inbound Webhook Call.

Retell is a service that lets you create voice agents that handle voice calls simply, based on a prompt or using a conversational flow builder.

**Who is it for**
Builders of Retell voice agents who want to make their agents more personalized.

**Prerequisites**
- Have a Retell AI account.
- Create a Retell agent.
- Purchase a phone number and associate it with your agent.
- Create a Google Sheet; for example, make a copy of this one. Your Google Sheet must have at least one column with the phone number; the remaining columns will be used to populate your Retell agent's dynamic variables.
- All fields are returned to Retell as strings (variables are replaced as text).

**How it works**
1. The webhook call is received from Retell. We filter the call using their whitelisted IP address.
2. The workflow extracts data from the webhook call and uses it to retrieve the user from Google Sheets.
3. It formats the data in the response to match Retell's expected format (see the sketch after this section).
4. Retell uses this data to replace dynamic variables in the prompts.

**How to use it**
See the description for screenshots!
1. Set the webhook name (keep it as POST).
2. Copy the webhook URL (e.g., https://your-instance.app.n8n.cloud/webhook/retell-dynamic-variables) and paste it into Retell's interface: navigate to "Phone Numbers", click on the phone number, and enable "Add an inbound webhook".
3. In your prompt (e.g., the "welcome message"), use the variable with this syntax: {{variable_name}} (see Retell's documentation). These variables will be dynamically replaced by the data in your Google Sheet.

**Notes**
- In Google Sheets, the phone number must start with '+ (the leading apostrophe keeps Sheets from interpreting the + as a formula).
- Phone numbers must be formatted like the example: with the + and country code, and no spaces.
- You can use any database: just replace Google Sheets with your own, making sure to keep the phone number formatting consistent.

👉 Reach out to us if you're interested in analysing your Retell agent conversations.
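A minimal sketch of the response-formatting step, returning every sheet column as a string dynamic variable. The call_inbound/dynamic_variables shape follows Retell's inbound-webhook docs at the time of writing, so verify it against the current API:

```javascript
// Sketch of formatting the matched sheet row for Retell (n8n Code node).
// The response shape is based on Retell's inbound-webhook docs; verify it.
const row = $input.first().json;

const dynamicVariables = {};
for (const [key, value] of Object.entries(row)) {
  dynamicVariables[key] = String(value); // Retell substitutes variables as text
}

return [{ json: { call_inbound: { dynamic_variables: dynamicVariables } } }];
```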
by Gareth B. Davies
An automated backup solution designed for self-hosted n8n users to automatically back up their workflows to Bitbucket, leveraging Bitbucket's free private repository offering. Perfect for maintaining version control of your n8n workflows without additional costs.

**How it works**
- Runs on a regular schedule to check all workflows in your n8n instance.
- Compares each workflow with its version in Bitbucket.
- Only uploads workflows that are new or have changed (see the sketch after this section).
- Uses basic rate limiting to stay within Bitbucket's API limits.
- Formats filenames for easy tracking and includes timestamps in commit messages.
- Handles errors gracefully with automatic retries.

**Set up steps (10-15 minutes)**
1. Create a free Bitbucket account and a private repository.
2. Create a Bitbucket app password with repository write access.
3. Add Bitbucket credentials to n8n (using your username and app password).
4. Set up n8n API access (generate an API key in your n8n instance).
5. Configure your Bitbucket workspace and repository names in the Set node.
6. Optional: adjust the backup schedule (default: 2 AM daily).

**Perfect for n8n self-hosters who want:**
- Version control for their workflows
- Automated daily backups
- Free private repository storage
- Easy workflow recovery
- Change tracking over time

The workflow includes basic error handling and rate limiting to ensure reliable backups even with larger numbers of workflows. Adjust your timing based on https://support.atlassian.com/bitbucket-cloud/docs/api-request-limits/.
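A hedged sketch of the compare-then-upload step, assuming Bitbucket's 2.0 src endpoint for committing files; the filename sanitization and one-second delay are illustrative choices, not the template's exact values:

```javascript
// Hedged sketch of the per-workflow backup step: commit the workflow JSON to
// Bitbucket only when it differs from the stored copy, then pause briefly
// for basic rate limiting. All credentials below are placeholders.
const WORKSPACE = 'your-workspace';
const REPO = 'your-repo';
const USER = 'your-username';
const APP_PASSWORD = 'your-app-password';

async function backupIfChanged(workflow, existingContent) {
  const current = JSON.stringify(workflow, null, 2);
  if (current === existingContent) return 'unchanged';

  const filename = `${workflow.name.replace(/[^a-z0-9-_]/gi, '_')}.json`;
  const form = new URLSearchParams();
  form.append(filename, current); // path/content pair per Bitbucket's src API
  form.append('message', `Backup ${workflow.name} at ${new Date().toISOString()}`);

  const res = await fetch(
    `https://api.bitbucket.org/2.0/repositories/${WORKSPACE}/${REPO}/src`,
    {
      method: 'POST',
      headers: { Authorization: 'Basic ' + btoa(`${USER}:${APP_PASSWORD}`) },
      body: form,
    }
  );
  if (!res.ok) throw new Error(`Bitbucket upload failed: ${res.status}`);

  await new Promise((resolve) => setTimeout(resolve, 1000)); // basic rate limiting
  return 'uploaded';
}
```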