by Ranjan Dailata
Who this is for?

The LinkedIn Company Story Generator is an automated workflow that extracts company profile data from LinkedIn using Bright Data's web scraping infrastructure, then transforms that data into a professionally written narrative or story using a language model (e.g., OpenAI, Gemini). The final output is sent via webhook notification, making it easy to publish, review, or further automate.

This workflow is tailored for:

- **Marketing Professionals**: Seeking to generate compelling company narratives for campaigns.
- **Sales Teams**: Aiming to understand potential clients through summarized company insights.
- **Content Creators**: Looking to craft stories or articles based on company data.
- **Recruiters**: Interested in obtaining concise overviews of companies for talent acquisition strategies.

What problem is this workflow solving?

Manually gathering and summarizing company information from LinkedIn is time-consuming and inconsistent. This workflow automates the process, ensuring:

- **Efficiency**: Quick extraction and summarization of company data.
- **Consistency**: Standardized summaries for uniformity across use cases.
- **Scalability**: Ability to process multiple companies without additional manual effort.

What this workflow does

- **Input Acquisition**: Receives a company's name or LinkedIn URL as input.
- **Data Extraction**: Uses Bright Data to scrape the company's LinkedIn profile.
- **Information Parsing**: Processes the extracted HTML content to retrieve relevant company details.
- **Summarization**: Uses Google Gemini to generate a concise company story.
- **Output Delivery**: Sends the summarized content to a specified webhook or email address.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth credential under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token (a sample request is sketched at the end of this description).
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the LinkedIn URL in the Set LinkedIn URL node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

- **Input Variations**: Modify the Set LinkedIn URL node to accept a different company LinkedIn URL.
- **Data Points**: Adjust the HTML Data Extractor node to retrieve additional details like employee count, industry, or headquarters location.
- **Summarization Style**: Customize the AI prompt to generate summaries in different tones or formats (e.g., formal, casual, bullet points).
- **Output Destinations**: Configure the output node to send summaries to various platforms, such as Slack, CRM systems, or databases.
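For reference, here is a minimal sketch of the kind of call the Bright Data data-extraction step makes against the Web Unlocker API. The endpoint and the zone name `web_unlocker1` are assumptions to verify against your own Bright Data dashboard; the workflow's HTTP Request node carries the same fields.

```javascript
// Hypothetical sketch of the Web Unlocker request (not the template's exact node config).
async function fetchLinkedInPage(companyUrl, webUnlockerToken) {
  const response = await fetch('https://api.brightdata.com/request', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${webUnlockerToken}`, // the Header Auth credential value
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      zone: 'web_unlocker1',   // the Web Unlocker zone you created
      url: companyUrl,         // e.g. https://www.linkedin.com/company/example-co/
      format: 'raw',           // return raw HTML for the HTML Data Extractor node
    }),
  });
  return response.text();
}
```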
by Hubschrauber
Fetches workflow definitions from within n8n, selecting only the ones that have one or more (configurable) assigned tags, and then:

- Derives a suitable backup filename by reducing the workflow name to a string with alphanumeric characters and no spaces. Note: This isn't bulletproof, but works as long as workflow names aren't too crazy. (A sketch of this sanitization is shown after this section.)
- Determines which workflows need to be backed up based on whether each one:
  - has been modified (Note: even repositioning a node counts), or
  - is new (Note: renaming counts as this).
- Commits JSON copies of each workflow, as necessary, to a Gitlab repository with a generated, date-stamped commit message.

Setup

Credentials

- Create a Gitlab credentials item and assign it to all Gitlab nodes.
- Create an n8n credentials item and assign it to the n8n node. Note: This was tested with http://localhost:5678/api/v1 but should work with any reachable n8n instance and API key.

Modify these values in the "Globals" node

- gitlab_owner - {{ your gitlab account }}
- gitlab_project - {{ your gitlab project name }}
- gitlab_workflow_path - {{ subdirectory in the project where backup files should be saved/committed }}
- tags_to_match_for_backup - {{ tag(s) to match for backup selection }}

ALERT: According to the n8n node's Filters -> tags field annotations and the API documentation, this supports a CSV list of multiple tags (e.g. tag1,tag2), but the API behavior requires workflows to have all of the listed tags, not any of them. See: https://github.com/n8n-io/n8n/issues/10348. TL;DR: don't expect a multiple-tag list to be more inclusive. Possible workaround: to match more than one tag value, duplicate the n8n node into multiple single-tag matches, or split and iterate multiple values, and merge the results.

Possible Enhancements

- Make the branch ("Reference") for all the Gitlab nodes configurable. Fixed on all as "main" in the template.
- Add an n8n node to generate an audit and store the output in Gitlab along with the backups.
- Extend the workflow at the end to create a Gitlab release/tag whenever any backup files are actually updated or created.
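A minimal sketch of the filename derivation described above, written as it might appear in an n8n Code node. This is illustrative only; the template's own expression may differ.

```javascript
// Reduce a workflow name to a safe, alphanumeric, no-spaces backup filename.
function toBackupFilename(workflowName) {
  const safe = workflowName
    .replace(/[^a-zA-Z0-9]+/g, '_')  // collapse anything non-alphanumeric into "_"
    .replace(/^_+|_+$/g, '');        // trim leading/trailing underscores
  return `${safe}.json`;
}

console.log(toBackupFilename('My Workflow (v2) / daily!')); // "My_Workflow_v2_daily.json"
```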
by Sk developer
🎥 Bulk TikTok Video Download Without Watermark to Google Drive

This workflow automates the process of downloading TikTok videos and uploading them to Google Drive. It reads TikTok URLs from a Google Sheet, downloads each video using the TikTok Video Downloader — a tool for downloading TikTok videos without watermark in HD quality — uploads it to Drive, makes it public, and updates the same sheet with the Drive link.

🔧 What It Does

- ✅ Manually triggered when ready to run.
- 📄 Reads TikTok URLs from a Google Sheet.
- 🔁 Loops through each URL one at a time.
- 🌐 Fetches video download links using the TikTok Video Downloader — a reliable TikTok video downloader without watermark.
- ⬇️ Downloads each video in high-definition (HD) format using the direct media link.
- ☁️ Uploads the video to Google Drive.
- 🔓 Sets public sharing permission for the video.
- ✏️ Updates the original Google Sheet with the public Drive URL.

📋 Google Sheet Example

Make sure your sheet has at least these columns:

| url | drive_link (to be auto-filled) |
|-------------------------------------|--------------------------------|
| https://www.tiktok.com/@user1... | (blank initially) |
| https://www.tiktok.com/@user2... | (blank initially) |

> The workflow reads from url and fills in drive_link after upload.

🧩 Nodes Used

| Node Name | Type | Purpose |
|------------------------------|-------------------|-------------------------------------------------------|
| When clicking ‘Execute’ | Manual Trigger | Starts the workflow manually |
| Get Data From Google Sheets | Google Sheets | Fetches rows (TikTok URLs) |
| Loop Over Items | Split In Batches | Iterates over each row |
| Call TikTok Downloader | HTTP Request | Gets video download link from TikTok Video Downloader |
| Wait | Wait | Optional delay to prevent overload |
| Download File | HTTP Request | Downloads HD video using media link |
| Upload File In Google Drive | Google Drive | Uploads the video to Google Drive |
| Set Public Permission | Google Drive | Makes the uploaded file publicly accessible |
| Update Row In Google Sheet | Google Sheets | Adds Drive link to the same row |
| Sleep | Wait | Small delay between each iteration |

📝 Requirements

- ✅ Google API credentials (Service Account) with access to Google Sheets and Google Drive
- 🔐 RapidAPI key for TikTok Video Downloader – a TikTok video downloader without watermark (HD supported)
- 🗂 A Google Sheet with a url column containing TikTok video URLs

🧩 Challenges Solved

| ❗ Challenge | ✅ Solution |
|-------------|-------------|
| TikTok video URLs often have watermarks and low quality | Used TikTok Video Downloader API for HD + no watermark download links |
| No easy way to bulk download and organize TikToks | Automated fetching, downloading, and uploading using n8n + Google Drive |
| Manual video saving and re-uploading to Drive is time-consuming | Eliminated all manual steps with a fully automated workflow |
| Tracking which videos are already processed | Automatically updates the Google Sheet row with the final Drive link |
| Drive files are private by default | Automatically sets public sharing permission on uploaded videos |
| Risk of API rate limits or throttling | Added Wait nodes and batch processing to avoid overload |

🎁 Benefits

| 🌟 Benefit | 💬 Description |
|------------|----------------|
| 🚀 Saves Time | Fully automates a previously manual workflow |
| 🎥 High Quality Content | Videos downloaded are HD + watermark-free — ready for reuse or archives |
| 🔁 Reusable Setup | Can process unlimited TikTok URLs via the Google Sheet |
| 📊 Organized Output | Keeps track of source URL and uploaded Drive link in a single sheet |
| 🔐 Secure but Shareable | Drive links are auto-shared publicly while remaining under your control |
| 🔄 Scalable | Can be run daily, weekly, or triggered by new rows — completely scalable |
| 💸 Cost-Effective | No need for paid tools or manual freelancers — runs on n8n + free APIs |

💡 Use Cases

- Content curation from TikTok
- Archiving user-submitted TikToks
- Automating social-to-cloud workflows
- Bulk migration of video content
- Saving TikTok videos in HD without watermark for sharing or archiving

📌 Tips

- Replace the manual trigger with Cron for full automation.
- Use the TikTok Video Downloader responsibly — check API limits.
- Store metadata (e.g., uploader, hashtags) in additional Google Sheet columns.

This tool helps ensure you're always downloading high-quality TikTok videos without watermark.
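Under the hood, the Set Public Permission step corresponds to the standard Google Drive v3 permissions call. A minimal sketch (outside of n8n, using the public Drive API) of what it does looks roughly like this:

```javascript
// Illustrative only: make an uploaded file readable by anyone with the link.
// fileId comes from the upload step; accessToken from your Google credentials.
async function makePublic(fileId, accessToken) {
  await fetch(`https://www.googleapis.com/drive/v3/files/${fileId}/permissions`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ role: 'reader', type: 'anyone' }),
  });
  // The shareable link written back to the sheet then follows the usual pattern:
  return `https://drive.google.com/file/d/${fileId}/view`;
}
```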
by Jimleuk
This n8n template shows you how to create an MCP server out of your existing n8n workflows. With this, any connected MCP client can get more done with powerful end-to-end workflows rather than just simple tools.

Designing agent tools for outcome rather than utility has been a long-recommended practice of mine, and it applies well when it comes to building MCP servers; in essence, agents should make the fewest calls possible to complete a task. This is why n8n can be a great fit for MCP servers! This template connects your agent/MCP client (like Claude Desktop) to your existing workflows by allowing the AI to discover, manage and run these workflows indirectly.

How it works

- An MCP trigger is used and attaches 4 custom workflow tools to discover and manage existing workflows and 1 custom workflow tool to execute them.
- We'll introduce the idea of "available" workflows which the agent is allowed to use. This helps limit and avoid some issues that come with trying to use every workflow, such as clashes or non-production workflows.
- The n8n node is a core node which taps into your n8n instance API and is able to retrieve all workflows or filter by tag. For our example, we've tagged the workflows we want to use with "mcp" and these are exposed through the tool "search workflows".
- Redis is used as our main memory for keeping track of which workflows are "available". The tools we have are "add workflow", "remove workflow" and "list workflows". The agent should be able to manage this autonomously.
- Our approach to letting the agent execute workflows is to use the Subworkflow trigger. The tricky part is figuring out the input schema for each one; this was eventually solved by pulling that information out of the workflow's template JSON and adding it to the "available" workflow's description (see the sketch after this section).
- To pass parameters through the Subworkflow trigger, we use the passthrough method: incoming data is used when parameters are not explicitly set within the node.
- When running, the agent will not see the "available" workflows immediately but will need to discover them via "list" and "search". The human will need to make the agent aware that these workflows should be preferred when answering queries or completing tasks.

How to use

- First, decide which workflows will be made visible to the MCP server. This example uses the tag "mcp", but you can include all workflows or filter in other ways.
- Next, ensure these workflows have Subworkflow triggers with an input schema set. This is how the MCP server will run them.
- Set the MCP server to "active", which turns on production mode and makes it available at the production URL.
- Use this production URL in your MCP client. For Claude Desktop, see the instructions here - https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop.
- There is a small learning curve which will shape how you communicate with this MCP server, so be patient and test. The MCP server will work better if there is a focused goal in mind, i.e. research and report, rather than just a collection of unrelated tools.

Requirements

- n8n API key to filter for selected workflows.
- n8n workflows with Subworkflow triggers!
- Redis for memory and tracking the "available" workflows.
- MCP client or agent for usage, such as Claude Desktop - https://claude.ai/download

Customising this workflow

- If your targeted workflows do not use the Subworkflow trigger, it is possible to amend the executeTool to use HTTP requests for webhooks.
- Managing available workflows helps if you have many workflows where some may be too similar for the agent. If this isn't a problem for you, however, feel free to remove the concept of "available" and let the agent discover and use all workflows!
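As a rough illustration of the "pull the input schema out of the workflow JSON" idea, a Code-node-style sketch might look like the following. The parameter path `parameters.workflowInputs.values` is an assumption that depends on your n8n version; inspect your own workflow JSON to confirm where the Subworkflow trigger stores its declared inputs.

```javascript
// Hypothetical sketch: summarize a workflow's Subworkflow (Execute Workflow) trigger inputs
// so the schema can be appended to the "available" workflow's tool description.
function describeWorkflowInputs(workflow) {
  const trigger = (workflow.nodes || []).find(
    (n) => n.type === 'n8n-nodes-base.executeWorkflowTrigger'
  );
  if (!trigger) return 'No subworkflow trigger found';
  // Assumed location of the input schema; verify against your workflow JSON.
  const fields = trigger.parameters?.workflowInputs?.values || [];
  return fields.map((f) => `${f.name} (${f.type || 'string'})`).join(', ');
}
```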
by Roman Rozenberger
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Who's it for

Content creators, marketers, and researchers who need to monitor multiple RSS feeds and get AI-generated summaries without manual work.

How it works

This workflow automatically monitors RSS feeds, filters new articles from the last X days, checks for duplicates, and generates structured AI summaries. It fetches full article content, converts HTML to markdown, and uses Gemini AI to create consistent summaries with quick takeaways, key points, and practical insights. All data is saved to Google Sheets for easy access and sharing.

The system processes RSS feeds in batches, ensuring no duplicate articles are processed twice by checking existing URLs in your Google Sheets (a sketch of this filtering step follows below). Each new article gets a comprehensive AI summary that includes the main message, key takeaways, important points, and practical applications.

Requirements

- Google Sheets access
- OpenRouter API key for the Gemini AI model or another language model
- RSS feed URLs to monitor

How to set up

Copy the template Google Sheet, add your RSS feeds in the "RSS FEEDS" tab, configure Google Sheets and OpenRouter credentials in n8n, and adjust the time filter in the Settings node. The workflow can run manually or on a schedule every hour.

How to customize

Modify AI prompts for different summary styles, change the time filter duration, add more data fields to Google Sheets, or switch to a different AI model in the LLM Chat Model node.
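For illustration, the "last X days" filter and the duplicate check could be expressed in an n8n Code node roughly as below. This is a sketch under the assumption that RSS items carry `isoDate`/`pubDate` and `link` fields and that previously processed URLs have already been read from the sheet; the template's own implementation may differ.

```javascript
// Hypothetical sketch of "keep only fresh, not-yet-seen articles".
function filterNewArticles(rssItems, existingUrls, daysBack = 3) {
  const cutoff = Date.now() - daysBack * 24 * 60 * 60 * 1000;
  const seen = new Set(existingUrls); // URLs already stored in the Google Sheet
  return rssItems.filter((item) => {
    const publishedAt = new Date(item.isoDate || item.pubDate).getTime();
    return publishedAt >= cutoff && !seen.has(item.link);
  });
}

// Example usage:
const fresh = filterNewArticles(
  [{ link: 'https://example.com/post', isoDate: new Date().toISOString() }],
  ['https://example.com/already-summarized']
);
console.log(fresh.length); // 1
```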
by InfraNodus
Optimize Your Top Performing Website Content with Google Analytics, Firecrawl, and InfraNodus

This template helps you:

- extract the top-performing pages from your website using Google Analytics
- scrape the content of the pages using the Firecrawl API (HTTP node provided)
- build a knowledge graph for all these pages with the topics and gaps identified using InfraNodus
- understand the main concepts and topical clusters in your top-performing content, so you can create more of it, while also identifying the content gaps — structural holes between the topics that you can use to generate new content ideas
- have access to a knowledge graph visualization of your top-performing content to explore it using the interactive network interface

How it works

This template uses InfraNodus to visualize and analyze your top-performing content. It will extract the top pages from the Google Analytics data for the website you choose and scrape their text content using the high-quality Firecrawl API. Then it will ingest every page into an InfraNodus graph you specify. The graph can be used to explore the content visually. The insights from the graph, such as the main topics and the gaps between them, will be shown to you at the end of the workflow.

You can use these insights to understand what kind of content you should focus on creating:

- to get the highest number of views and to establish topical authority in your area, which is good for SEO and LLM optimization — focusing on the topics identified in the top content
- to discover the content gaps — which topics are not connected yet that you could link with new content ideas and publish — this caters to your audience's interests, but connects your existing ideas in a new way, so you deliver content that's relevant but also novel.

Here's a description step by step:

1. Trigger the workflow.
2. Extract a list of top (25, 50) pages from your Google Analytics account (you'll need to connect it via the Google Cloud API).
3. Fix the extracted data and add a correct URL prefix to each page (if your Analytics has relative paths only); see the sketch at the end of this section.
4. Loop through each page extracted.
5. Extract the text content of every page using the high-quality Firecrawl API.
6. Ingest the text content into the InfraNodus graph that you specify.
7. Once all the pages are ingested into the InfraNodus graph, access the AI insights endpoint in InfraNodus and get the information about the main topics and gaps.
8. Display this information to the user.

How to use

You need an InfraNodus API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.

Requirements

- An InfraNodus account and API key
- Optional: A Google Analytics account for your property (alternatively, you can modify this workflow to provide a list of the most popular pages)
- Optional: Google Cloud API access (to access the data from your Google Analytics account — follow the n8n instructions)
- Optional: A Firecrawl API key for better-quality web page scraping (otherwise, use the standard HTTP to Text node from n8n)

Customizing this workflow

You can customize this workflow by using a list of the URL pages you want to analyze from a Google Sheet. Alternatively, you can use the Google SERP node to extract top search results for a query and get the main topics for them.

For support and feedback, please contact us at https://support.noduslabs.com

To learn more about InfraNodus: https://infranodus.com
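The URL-prefix fix in step 3 can be as simple as the following Code-node-style sketch. Field names such as `pagePath` and the example origin are assumptions; adjust them to whatever your Google Analytics node actually returns.

```javascript
// Hypothetical sketch: turn relative GA page paths into absolute URLs for scraping.
function toAbsoluteUrls(pages, siteOrigin = 'https://www.example.com') {
  return pages.map((page) => {
    const path = page.pagePath || page.url || '';
    const absolute = path.startsWith('http') ? path : `${siteOrigin}${path}`;
    return { ...page, url: absolute };
  });
}

// Example usage:
console.log(toAbsoluteUrls([{ pagePath: '/blog/top-post' }]));
// [{ pagePath: '/blog/top-post', url: 'https://www.example.com/blog/top-post' }]
```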
by HoangSP
Name: AI-Powered Research Agent using Perplexity Sonar

Description: This workflow acts as an AI-powered research assistant using the Perplexity Sonar model. When triggered by another workflow, it sends a user-defined prompt to the Perplexity API to retrieve up-to-date search results. The response is then parsed into a clean format for downstream processing.

How it Works:

1. Trigger: Activated from another workflow via Execute Workflow Trigger.
2. Prompt Setup: Sets a system role message and user query dynamically.
3. API Call: Sends a POST request to Perplexity's /chat/completions endpoint with your credentials (a sample request body is sketched below).
4. Response Handling: Extracts the message content from the API response.
5. Output: Returns the result, ready for display or further processing.

Requirements:

- A Perplexity AI API Key
- Set up authentication via Header Auth with a Bearer token
- Ensure your account allows outbound HTTP requests in n8n

Customization Tips:

- Modify the system prompt to suit your research domain.
- Chain this workflow with other automation like blog creation, summaries, etc.
- Replace the output handling logic to fit into Google Sheets, Notion, or Telegram.
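For reference, the JSON body sent to the /chat/completions endpoint (with the Bearer token in the Authorization header) typically looks like the sketch below. The model name and prompts are placeholders; check Perplexity's current model list before relying on them.

```json
{
  "model": "sonar",
  "messages": [
    { "role": "system", "content": "You are a precise research assistant. Cite sources." },
    { "role": "user", "content": "Summarize the latest developments on my research topic." }
  ]
}
```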
by Artem Boiko
Revit to HTML Quantity Takeoff Generator

Automates extraction of wall quantities from Revit models and creates a professional, interactive HTML report.

Key Features

- Automated wall quantity analysis
- Calculates volumes by wall type ("Type Name")
- Generates an interactive HTML QTO report
- Includes summary statistics: total elements, total and average volumes
- Provides a detailed breakdown by element type

How it works

1. Upload a Revit file as input.
2. The workflow extracts wall quantities and types.
3. It creates and saves a ready-to-share HTML dashboard with QTO data.

No API keys required. Runs offline. Output is a professional, ready-to-use HTML report.
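The core aggregation ("volumes by wall type") boils down to a grouping step like the hedged sketch below. The element fields `typeName` and `volume` are assumptions about the extracted data, not the template's exact property names.

```javascript
// Hypothetical sketch: group extracted wall elements by type and sum volumes.
function summarizeWalls(walls) {
  const byType = {};
  for (const wall of walls) {
    const type = wall.typeName || 'Unknown';
    byType[type] = (byType[type] || 0) + (wall.volume || 0);
  }
  const totalVolume = Object.values(byType).reduce((a, b) => a + b, 0);
  return {
    totalElements: walls.length,
    totalVolume,
    averageVolume: walls.length ? totalVolume / walls.length : 0,
    volumeByType: byType, // feeds the per-type breakdown table in the HTML report
  };
}

console.log(summarizeWalls([
  { typeName: 'Basic Wall - 200mm', volume: 12.4 },
  { typeName: 'Basic Wall - 200mm', volume: 9.1 },
  { typeName: 'Curtain Wall', volume: 3.2 },
]));
```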
by n8n Team
This workflow pushes Stripe charges to HubSpot contacts. It uses the Stripe API to get all charges and the HubSpot API to update the contacts. The workflow will create a new HubSpot property to store the total amount charged. If the property already exists, it will update the property.

Prerequisites

- Stripe credentials.
- HubSpot credentials.

How it works

1. On a schedule, check if the property exists in HubSpot. If it doesn't exist, create it. The default schedule is once a day at midnight.
2. Once the property is ascertained, the first Stripe node gets all charges.
3. Once the charges are returned, the second Stripe node gets extra customer information.
4. Once the customer information is returned, the Merge data node merges the customer information with the charges so that the next node, Aggregate totals, can calculate the total amount charged per contact (a sketch of this aggregation is shown below).
5. Once we have the total amount charged per contact, the Create or update customer node creates a new HubSpot contact if it doesn't exist, or updates the contact if it does, with the total amount charged.
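A rough sketch of what the Aggregate totals step computes. Field names such as `email` are illustrative of the merged data, not the template's exact item shape; note that Stripe reports amounts in the smallest currency unit (e.g. cents).

```javascript
// Hypothetical sketch: total charges per customer email, from merged Stripe data.
function totalChargedPerContact(charges) {
  const totals = {};
  for (const charge of charges) {
    const email = charge.email || charge.billing_details?.email;
    if (!email) continue;               // skip charges we can't attach to a contact
    totals[email] = (totals[email] || 0) + charge.amount; // amount in cents
  }
  // Convert to the shape the HubSpot "create or update" step expects.
  return Object.entries(totals).map(([email, cents]) => ({
    email,
    total_amount_charged: cents / 100,  // e.g. store as a currency value in HubSpot
  }));
}

console.log(totalChargedPerContact([
  { email: 'ada@example.com', amount: 2500 },
  { email: 'ada@example.com', amount: 1000 },
]));
// [{ email: 'ada@example.com', total_amount_charged: 35 }]
```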
by Custom Workflows AI
Introduction

The Content SEO Audit Workflow is a powerful automated solution that generates comprehensive SEO audit reports for websites. By combining the crawling capabilities of DataForSEO with the search performance metrics from Google Search Console, this workflow delivers actionable insights into content quality, technical SEO issues, and performance optimization opportunities.

The workflow crawls up to 1,000 pages of a website, analyzes various SEO factors including metadata, content quality, internal linking, and search performance, and then generates a professional, branded HTML report that can be shared directly with clients. The entire process is automated, transforming what would typically be hours of manual analysis into a streamlined workflow that produces consistent, thorough results.

This workflow bridges the gap between technical SEO auditing and practical, client-ready deliverables, making it an invaluable tool for SEO professionals and digital marketing agencies.

Who is this for?

This workflow is designed for SEO consultants, digital marketing agencies, and content strategists who need to perform comprehensive content audits for clients or their own websites. It's particularly valuable for professionals who:

- Regularly conduct SEO audits as part of their service offerings
- Need to provide branded, professional reports to clients
- Want to automate the time-consuming process of content analysis
- Require data-driven insights to inform content strategy decisions

Users should have basic familiarity with SEO concepts and metrics, as well as a basic understanding of how to set up API credentials in n8n. While no coding knowledge is required to run the workflow, users should be comfortable with configuring workflow parameters and following setup instructions.

What problem is this workflow solving?

Content audits are essential for SEO strategy but are traditionally labor-intensive and time-consuming. This workflow addresses several key challenges:

- Manual Data Collection: Gathering data from multiple sources (crawlers, Google Search Console, etc.) typically requires hours of work. This workflow automates the entire data collection process.
- Inconsistent Analysis: Manual audits can suffer from inconsistency in methodology. This workflow applies the same comprehensive analysis criteria to every page, ensuring thorough and consistent results.
- Report Generation: Creating professional, client-ready reports often requires additional design work after the analysis is complete. This workflow generates a fully branded HTML report automatically.
- Data Integration: Correlating technical SEO issues with actual search performance metrics is difficult when working with separate tools. This workflow seamlessly integrates crawl data with Google Search Console metrics.
- Scale Limitations: Manual audits become increasingly difficult with larger websites. This workflow can efficiently process up to 1,000 pages without additional effort.

What this workflow does

Overview

The Content SEO Audit Workflow crawls a specified website, analyzes its content for various SEO issues, retrieves performance data from Google Search Console, and generates a comprehensive HTML report. The workflow identifies issues in five key categories: status issues (404 errors, redirects), content quality (thin content, readability), metadata SEO (title/description issues), internal linking (orphan pages, excessive click depth), and performance (underperforming content).
The final report includes executive summaries, detailed issue breakdowns, and actionable recommendations, all branded with your company's colors and logo.

Process

1. Initial Configuration: The workflow begins by setting parameters including the target domain, crawl limits, company information, and branding colors.
2. Website Crawling: The workflow creates a crawl task in DataForSEO and periodically checks its status until completion.
3. Data Collection: Once crawling is complete, the workflow:
   - Retrieves the raw audit data from DataForSEO
   - Extracts all URLs with status code 200 (successful pages)
   - Queries the Google Search Console API for each URL to get clicks and impressions data
   - Identifies 404 and 301 pages and retrieves their source links
4. Data Analysis: The workflow analyzes the collected data to identify issues including:
   - Technical issues: 404 errors, redirects, canonicalization problems
   - Content issues: thin content, outdated content, readability problems
   - SEO metadata issues: missing/duplicate titles and descriptions, H1 problems
   - Internal linking issues: orphan pages, excessive click depth, low internal links
   - Performance issues: underperforming pages based on GSC data
5. Report Generation: Finally, the workflow:
   - Calculates a health score based on the severity and quantity of issues
   - Generates prioritized recommendations
   - Creates a comprehensive HTML report with interactive tables and visualizations
   - Customizes the report with your company's branding
   - Provides the report as a downloadable HTML file

Setup

To set up this workflow, follow these steps:

1. Import the workflow: Download the JSON file and import it into your n8n instance.
2. Configure DataForSEO credentials:
   - Create a DataForSEO account at https://app.dataforseo.com/api-access (they offer a free $1 credit for testing)
   - Add a new "Basic Auth" credential in n8n following the HTTP Request Authentication guide
   - Assign this credential to the "Create Task", "Check Task Status", "Get Raw Audit Data", and "Get Source URLs Data" nodes
3. Configure Google Search Console credentials:
   - Add a new "Google OAuth2 API" credential following the Google OAuth guide
   - Ensure your Google account has access to the Google Search Console property you want to analyze
   - Assign this credential to the "Query GSC API" node
4. Update the "Set Fields" node with:
   - dfs_domain: The website domain you want to audit
   - dfs_max_crawl_pages: Maximum number of pages to crawl (default: 1000)
   - dfs_enable_javascript: Whether to enable JavaScript rendering (default: false)
   - company_name: Your company name for the report branding
   - company_website: Your company website URL
   - company_logo_url: URL to your company logo
   - brand_primary_color: Your primary brand color (hex code)
   - brand_secondary_color: Your secondary brand color (hex code)
   - gsc_property_type: Set to "domain" or "url" depending on your Google Search Console property type
5. Run the workflow: Click "Start" and wait for it to complete (approximately 20 minutes for 500 pages).
6. Download the report: Once complete, download the HTML file from the "Download Report" node.

How to customize this workflow to your needs

This workflow can be adapted in several ways to better suit your specific requirements:

- Adjust crawl parameters: Modify the "Set Fields" node to change:
  - The maximum number of pages to crawl (dfs_max_crawl_pages). This workflow supports up to 1000 pages.
  - Whether to enable JavaScript rendering for JavaScript-heavy sites (dfs_enable_javascript)
- Customize issue detection thresholds: In the "Build Report Structure" code node, you can modify (see the illustrative snippet after this list):
  - Word count thresholds for thin content detection (currently 1500 words)
  - Click depth thresholds (currently flags pages deeper than 4 clicks)
  - Title and description length parameters (currently 40-60 chars for titles, 70-155 for descriptions)
  - Readability score thresholds (currently flags Flesch-Kincaid scores below 55)
- Modify the report design: In the "Generate HTML Report" code node, you can:
  - Adjust the HTML/CSS to change the report layout and styling
  - Add or remove sections from the report
  - Change the recommendations logic
  - Modify the health score calculation algorithm
- Add additional data sources: You could extend the workflow by:
  - Adding Pagespeed Insights data for performance metrics
  - Incorporating backlink data from other APIs
  - Adding keyword ranking data from rank tracking APIs
- Implement automated delivery: Add nodes after the "Download Report" to:
  - Send the report directly to clients via email
  - Upload it to cloud storage
  - Create a PDF version of the report
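The thresholds mentioned above could be gathered in one place at the top of the "Build Report Structure" code node, along the lines of this sketch. Constant and field names are illustrative, not the node's actual variable names.

```javascript
// Hypothetical grouping of the detection thresholds described above.
const THRESHOLDS = {
  thinContentWords: 1500,        // pages below this word count are flagged as thin
  maxClickDepth: 4,              // pages deeper than this are flagged
  titleLength: { min: 40, max: 60 },        // characters
  descriptionLength: { min: 70, max: 155 }, // characters
  minReadabilityScore: 55,       // Flesch-Kincaid scores below this are flagged
};

// Example check for a single crawled page object (field name illustrative):
function isThinContent(page) {
  return (page.wordCount || 0) < THRESHOLDS.thinContentWords;
}
```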
by Joseph
📄 Google Script Workflow: Upload File from URL to Google Drive (via n8n)

🔧 Purpose:

This lightweight Google Apps Script acts as a server endpoint that receives a file URL (from n8n), downloads the file, uploads it to your specified Google Drive folder, and responds with the file's Drive URL. This is useful for large video/audio files that n8n cannot handle directly via HTTP Download nodes.

🚀 Setup Steps:

1. Create a New Script Project

Go to https://script.google.com, click "New Project", and rename the project to something like: DriveUploader

2. Paste the Script Code

Replace the default Code.gs content with the following (your custom script):

```javascript
function doPost(e) {
  const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here

  try {
    const data = JSON.parse(e.postData.contents);

    // 🔒 Check for correct secret key
    if (!data.secret || data.secret !== SECRET_KEY) {
      return ContentService.createTextOutput("Unauthorized")
        .setMimeType(ContentService.MimeType.TEXT);
    }

    const videoUrl = data.videoUrl;
    const folderId = 'YOUR_FOLDER_ID_HERE'; // Replace with your target folder ID
    const folder = DriveApp.getFolderById(folderId);

    const response = UrlFetchApp.fetch(videoUrl);
    const blob = response.getBlob();
    const file = folder.createFile(blob);
    file.setName('uploaded_video.mp4'); // You can customize the name

    return ContentService.createTextOutput(file.getUrl())
      .setMimeType(ContentService.MimeType.TEXT);
  } catch (err) {
    return ContentService.createTextOutput("Error: " + err.message)
      .setMimeType(ContentService.MimeType.TEXT);
  }
}
```

3. Generate & Set Up Secret Key

To allow only authorized POST requests to your script, generate a secret key from any reliable key generator. You can head over to Acte, click generate, and copy the "Encryption key 256". Paste it into the 'your-strong-secret-here' placeholder in your script, then click save:

```javascript
const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here
```

4. Replace Folder ID in Code

Open the target Drive folder in your browser. The folder ID is the part of the URL after /folders/.

Example: https://drive.google.com/drive/u/0/folders/1Xabc12345678defGHIJklmn

Paste that ID in the script:

```javascript
const folderId = '1Xabc12345678defGHIJklmn';
```

5. Set Up Deployment as Web App

- Click "Deploy" > "Manage Deployments" > "New Deployment"
- Under Select type, choose Web app
- Description: Upload from URL to Drive
- Execute as: Me
- Who has access: Anyone
- Click Deploy
- Authorize the script when prompted
- Copy the Web App URL

📤 How to Use in n8n

1. HTTP Request Node

- Method: POST
- URL: (your Web App URL)
- Secret key: the secret key set in the script (sent in the JSON body below)
- Body Content Type: JSON
- Body:

```json
{
  "videoUrl": "https://example.com/path/to/your.mp4",
  "secret": "your-strong-secret-here"
}
```

videoUrl: The file download URL
secret: The secret key you generated and set up in the script

2. Rename Node

A simple Drive update node to rename the file using the Drive file URL returned from the script.
by Ranjan Dailata
Who this is for?

The Brand Content Extract, Summarization & Sentiment Analysis workflow is designed for professionals and teams who need to monitor, understand, and act on public brand perception at scale. It is ideal for:

- Brand Managers - Looking to track how their brand is portrayed online.
- Marketing Analysts - Seeking insights from competitor and industry content.
- PR & Communications Teams - Evaluating media tone and potential reputation risks.
- Data Scientists & AI Developers - Automating content intelligence pipelines.
- Growth Hackers - Performing large-scale web listening for campaign optimization.

What problem is this workflow solving?

Manually tracking and interpreting how your brand is mentioned across blogs, news sites, or product reviews is labor-intensive and unscalable. Traditional scraping tools return raw data but lack insights like summarization and sentiment analysis. This workflow addresses:

- Scalable extraction of brand-related content using Bright Data's infrastructure.
- Textual data extraction for easy decision-making or alerting.
- Automated summarization of verbose or multi-paragraph articles using Gemini.
- Sentiment analysis of how a brand is being portrayed.

What this workflow does

1. Receives input: a brand URL for the data extraction and analysis.
2. Uses Bright Data's Web Unlocker to extract content from relevant sites.
3. Cleans and preprocesses the scraped content for readability.
4. Sends the content to Google Gemini for enriched results, including:
   - Cleaned content
   - Summary
   - Sentiment analysis
5. Sends the response to a target system via webhook notification.
6. Persists the response to disk.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth credential under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Add a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node to set the brand content URL and the Bright Data zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

- **Update Source**: Update the workflow input to read from Google Sheets or Airbase for dynamically tracking multiple brands or topics.
- **AI Prompt Customization**: Tailor the Gemini prompts for:
  - Summary length (brief vs. detailed)
  - Detailed sentiment with a custom structured data format (an example structure is sketched below)
  - Brand-specific tone detection (e.g., trust, excitement, dissatisfaction)
- **Output Destinations**: Configure the output node to send the responses to various platforms, such as Slack, CRM systems, or databases.
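If you ask Gemini for a custom structured data format, a response shape along these lines tends to work well for downstream webhooks and dashboards. The field names here are purely illustrative, not the template's defined schema.

```json
{
  "summary": "Two-to-three sentence recap of the article's main points about the brand.",
  "sentiment": {
    "label": "positive",
    "score": 0.82,
    "tones": ["trust", "excitement"]
  },
  "key_mentions": [
    "Product launch received favorable coverage",
    "Pricing criticized in one review"
  ]
}
```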