by automedia
Scheduled YouTube Transcription with Duplicate Prevention

## Who's It For?

This template is for advanced users, content teams, and data analysts who need a robust, automated system for capturing YouTube transcripts. It's ideal for those who monitor multiple channels and want to ensure they only process and save each video's transcript once.

## What It Does

This is an advanced, "set-it-and-forget-it" workflow that runs on a daily schedule to monitor YouTube channels for new content. It enhances the basic transcription process by connecting to your Supabase database to prevent duplicate entries. The workflow fetches all recent videos from the channels you track, filters out any that are too old, and then checks your database to see whether a video's transcript has already been saved. Only brand-new videos are sent for transcription via the youtube-transcript.io API, with the final data (title, URL, full transcript, author) saved back to your Supabase table.

## Requirements

- A Supabase account with a table to store video data. This table must have a source_url column to enable duplicate checking.
- An API key from youtube-transcript.io (offers a free tier).
- The Channel ID for each YouTube channel you want to track.

## How to Set Up

1. Set Your Time Filter: In the "Max Days" node, set the number of days you want to look back for new videos (e.g., 7 for the last week).
2. Add Channel IDs: In the "Channels To Track" node, replace the example YouTube Channel IDs with the ones you want to monitor.
3. Configure API Credentials: Select the "Get Transcript from API" node. In the credentials tab, create a new "Header Auth" credential. Name it youtube-transcript-io, set the "Name" field to x-api-key, and paste your API key into the "Value" field.
4. Connect Your Supabase Account: This workflow uses Supabase in two places: "Check if URL Is In Database" and "Add to Content Queue Table". You must configure your Supabase credentials in both nodes. In each node, select your target table and ensure the columns are mapped correctly.
5. Adjust the Schedule: The "Schedule Trigger" node is set to run once a day. Click it to adjust the time and frequency to your needs.
6. Activate the Workflow: Save your changes and toggle the workflow to Active.
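The filtering described above (drop videos older than "Max Days", skip URLs already in Supabase) can be sketched as a small Code-node-style helper. This is illustrative, not the template's actual node code; the shapes of `videos` and `knownUrls` are assumptions standing in for the channel-feed items and the result of the "Check if URL Is In Database" lookup:

```javascript
// Sketch: keep only videos that are recent enough and not already saved.
// `videos` mimics items from the channel feed; `knownUrls` mimics the
// URLs returned by the Supabase duplicate-check query.
function selectNewVideos(videos, knownUrls, maxDays, now = new Date()) {
  const cutoff = now.getTime() - maxDays * 24 * 60 * 60 * 1000;
  const seen = new Set(knownUrls);
  return videos.filter(v =>
    new Date(v.publishedAt).getTime() >= cutoff && !seen.has(v.url)
  );
}
```

Only the videos this filter returns would be forwarded to the transcription API, which is what keeps the Supabase table free of duplicates.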
by Ertay Kaya
Generate responses for Google Play Store reviews using Anthropic Claude, Google Drive and Google Play Store API

This workflow empowers app developers and community management teams by automating the generation and posting of responses to user reviews on the Google Play Store. Designed to streamline the engagement process, it drastically reduces the manual workload on community managers by integrating AI-driven responses with necessary human oversight. By leveraging n8n's workflow automation capabilities, this solution eliminates the need for costly third-party platforms like AppFollow or Appbot, making it a cost-effective and efficient alternative.

## Pre-requisites

- Google Drive & Google Sheets access: to store and manage review spreadsheets.
- Google Play Developer account / service account: to fetch and respond to app reviews.
- LLM credentials (e.g., Anthropic): required for generating responses.

## Workflow steps

1. Initialise and trigger workflow: the process begins daily at 10 AM through a scheduled trigger.
2. Fetch application data: uses a data table (Google Play apps) to retrieve a list of applications with their bundle_id and name, essential for identifying review sources.
3. Collect Google Play reviews: retrieves the previous day's reviews from the Google Play Store based on the app data, and stores them in Google Sheets for further processing.
4. Generate AI responses: the AI model generates initial responses based on review content. Responses are structured and stored alongside the reviews in a Google Spreadsheet located in a Google Drive folder called ToReview.
5. Human review & modification: community managers review and refine the AI-generated responses. Reviewed spreadsheets are moved to the ToSubmit Google Drive folder by the editor.
6. Post verified responses: the workflow triggers again at 5 PM to access the reviewed spreadsheets in the ToSubmit folder and posts the human-verified responses back to the respective reviews on the Google Play Store. Logs are maintained, recording each response's success or failure.
7. Archive processed spreadsheets: after posting the responses, the workflow moves the processed files from ToSubmit to a folder called Archived.
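The posting step boils down to one API call per review. A hedged sketch of how such a request could be assembled for the Google Play Developer API's reviews.reply endpoint (the URL shape follows the androidpublisher v3 reference; the helper itself is illustrative, and OAuth authentication is assumed to be handled by the HTTP Request node's service-account credential):

```javascript
// Sketch: build the reviews.reply request for one reviewed spreadsheet row.
// bundleId and reviewId come from the app data and the stored review;
// replyText is the human-verified response.
function buildReplyRequest(bundleId, reviewId, replyText) {
  return {
    method: 'POST',
    url: `https://androidpublisher.googleapis.com/androidpublisher/v3/` +
         `applications/${bundleId}/reviews/${reviewId}:reply`,
    body: { replyText },
  };
}
```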
by Naveen Choudhary
Automate LinkedIn Sales Navigator contact extraction to Google Sheets

This workflow scrapes LinkedIn Sales Navigator search results and automatically saves contact details to Google Sheets, with pagination support and rate-limiting protection.

## Who's it for

Sales teams, recruiters, and business development professionals who need to extract and organize LinkedIn contact data at scale without manual copy-pasting.

## What it does

The workflow connects to a LinkedIn scraping API to fetch contact information from Sales Navigator search results. It handles pagination automatically, extracts contact details (name, title, company, location, profile URL), and appends them to a Google Sheet. Built-in rate limiting (30-60 second delays) prevents API blocks and mimics natural browsing behavior.

## Requirements

- **Self-hosted n8n instance** (this workflow will NOT work on n8n Cloud due to cookie requirements and third-party API usage)
- LinkedIn Sales Navigator account
- Google Sheets account
- EditThisCookie browser extension
- API access from the creator (1-month free trial available)

## How to set up

Step 1: Get API Access
Email the creator to request 1 month of free API access using the link in the workflow. You'll receive your API key within 24 hours.

Step 2: Configure API Authentication
1. Click the "Scrape LinkedIn Contacts API" node
2. Under Authentication, select "Header Auth"
3. Create a new credential with Name: x-api-key and your received API key as the Value
4. Save the credential

Step 3: Extract LinkedIn Cookies
1. Install the EditThisCookie extension
2. Navigate to LinkedIn Sales Navigator
3. Click the cookie icon in your browser toolbar
4. Click "Export" and copy the cookie data
5. Paste it into the cookies field in the "Set Search Parameters" node

Step 4: Configure Your Search
In the "Set Search Parameters" node, update:
- cookies: your exported LinkedIn cookies
- url: your LinkedIn Sales Navigator search URL
- total_pages: number of pages to scrape (default: 2; each page = ~25 contacts)

Step 5: Set Up Google Sheets
1. Make a copy of the template Google Sheet (or create your own with matching column headers)
2. In the "Save Contacts to Google Sheets" node, connect your Google Sheets account
3. Select your destination spreadsheet and sheet name

Important security note: keep your LinkedIn cookies private. Never share them with others or commit them to public repositories.

## Customization options

- Adjust total_pages to control how many contacts you scrape
- Modify the delay in the "Rate Limit Delay Between Requests" node (default: 30-60 seconds, random); do not lower it, to avoid API blocks
- Customize which contact fields to save in the Google Sheets column mapping
- Change the search URL to target different prospect segments or filters
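The pagination and rate-limiting behavior described above can be sketched as two small helpers, in the spirit of the "Rate Limit Delay Between Requests" node. Both are illustrative assumptions: the 25-contacts-per-page figure comes from the description, and the real scraping API's paging parameter may look different:

```javascript
// Sketch: a randomized 30-60 s delay, mimicking natural browsing cadence.
function randomDelayMs(minSec = 30, maxSec = 60) {
  return (minSec + Math.random() * (maxSec - minSec)) * 1000;
}

// Sketch: the record offsets for each of `totalPages` result pages,
// assuming ~25 contacts per Sales Navigator page.
function pageOffsets(totalPages, pageSize = 25) {
  return Array.from({ length: totalPages }, (_, i) => i * pageSize);
}
```

With total_pages set to 2 you would fetch offsets 0 and 25, sleeping a random 30-60 seconds between the two requests.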
by Lorena
This workflow collects images from web search on a specific query, detects labels in them, and stores this information in a Google Sheet.
by Jan Oberhauser
1. Triggers the workflow every 15 minutes
2. Reads the data from a Google Sheet
3. Converts the data to XLS
4. Uploads the file to Dropbox
by n8n Team
This workflow will guide you through obtaining a spreadsheet file, reading it, making a change, and then saving it to local or cloud storage.
by Yassin Zehar
Description

Automated workflow that creates Jira issues directly from Streamlit form submissions. It receives webhook data, validates and transforms it to Jira's API schema, creates the issue, and returns the ticket details to the frontend application.

## Context

Bridges the gap between lightweight Streamlit prototypes and enterprise Jira workflows. Enables rapid ticket creation while maintaining Jira as the authoritative source of truth. Includes safety mechanisms to prevent duplicate submissions and malformed requests.

## Target Users

- Product Managers building internal request portals.
- Engineering Managers creating demo applications.
- Teams requiring instant Jira integration without complex UI development.
- Project Managers using Jira for management and reporting.
- Organizations wanting controlled ticket creation without exposing Jira directly.

## Technical Requirements

- n8n instance (cloud or self-hosted) with webhook capabilities
- Jira Cloud project with API token and issue creation permissions
- Streamlit application configured to POST to the n8n webhook endpoint
- Optional: custom field ID for Story Points (typically customfield_10016)

## Workflow Steps

1. Webhook Trigger - receives the POST from Streamlit with the ticket payload.
2. Deduplication Guard - filters out ping requests and rapid duplicate submissions.
3. Data Validation - ensures required fields are present and properly formatted.
4. Schema Transformation - maps Streamlit fields to the Jira API structure.
5. Jira API Call - creates the issue via the REST API, with error handling.
6. Response Formation - returns a success status with the issue key and URL.

## Key Features

- Duplicate submission prevention.
- Rich text description formatting for Jira.
- Configurable priority and issue type mapping.
- Story points integration for agile workflows.
- Comprehensive error handling and logging.
- Clean JSON response for frontend feedback.

## Validation Testing

- Ping/test requests are ignored without creating issues.
- The first submission creates a Jira ticket with proper formatting.
- Rapid resubmission is blocked to prevent duplicates.
- All field types (priority, labels, due dates, story points) map correctly.
- Error responses are handled gracefully.

## Expected Output

- Valid Jira issue created in the specified project.
- JSON response: {ok: true, jiraKey: "PROJ-123", url: "https://domain.atlassian.net/browse/PROJ-123"}
- No orphaned or duplicate tickets.
- Audit trail in the n8n execution logs.

## Implementation Notes

- Jira Cloud requires accountId for the assignee (not username).
- The date format must be YYYY-MM-DD for due dates.
- The Story Points field ID varies by Jira configuration.
- Enable response output in the HTTP node for debugging.
- Consider rate limiting for high-volume scenarios.

Tutorial video: Watch the YouTube tutorial video

## How it works

- Trigger: the webhook fires when the app submits.
- Guard: ignore pings and invalid requests, deduplicate rapid repeats.
- Prepare: normalize to Jira's field model (including the Atlassian document description).
- Create: POST to /rest/api/3/issue and capture the key.
- Respond: send { ok, jiraKey, url } back to Streamlit for instant UI feedback.

About me: I'm Yassin, IT Project Manager, Agile & Data specialist, scaling tech products with data-driven project management. Feel free to connect with me on LinkedIn.
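The Schema Transformation step can be sketched as a mapping from a validated Streamlit payload onto Jira Cloud's v3 issue schema, including the Atlassian Document Format description the notes mention. The input field names (`summary`, `issueType`, `storyPoints`, ...) are illustrative stand-ins for the form fields, and customfield_10016 is only the typical Story Points ID; adjust both to your configuration:

```javascript
// Sketch: map a validated Streamlit submission to a Jira v3 issue payload.
function toJiraIssue(input, projectKey = 'PROJ') {
  const fields = {
    project: { key: projectKey },
    summary: input.summary,
    issuetype: { name: input.issueType || 'Task' },
    // Jira Cloud v3 expects descriptions in Atlassian Document Format
    description: {
      type: 'doc',
      version: 1,
      content: [
        { type: 'paragraph', content: [{ type: 'text', text: input.description || '' }] },
      ],
    },
  };
  if (input.priority) fields.priority = { name: input.priority };
  if (input.labels) fields.labels = input.labels;
  if (input.dueDate) fields.duedate = input.dueDate; // must be YYYY-MM-DD
  if (input.storyPoints != null) fields.customfield_10016 = input.storyPoints;
  return { fields };
}
```

The resulting object is what gets POSTed to /rest/api/3/issue; the issue key in the response feeds the { ok, jiraKey, url } reply to Streamlit.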
by Jan Oberhauser
1. Download XML data
2. Convert it to JSON
3. Change the title in the data
4. Convert it back to XML
5. Upload the file to Dropbox
by Trey
A quick example showing how to get the local date and time into a Function node using moment.js. This relies on the GENERIC_TIMEZONE environment variable being correctly configured (see the docs here).

NOTE: In order for this to work, you must whitelist the moment library for use by setting the following environment variable: NODE_FUNCTION_ALLOW_EXTERNAL=moment. Note also that the .tz() method is provided by moment-timezone; if moment().tz(...) is undefined in your environment, require and whitelist moment-timezone instead.

For convenience, the Function code is as follows:

```javascript
const moment = require('moment');

// Create a moment in the instance's configured timezone
let date = moment().tz($env['GENERIC_TIMEZONE']);

let year = date.year();
let month = date.month(); // zero-indexed!
let day = date.date();
let hour = date.hours();
let minute = date.minutes();
let second = date.seconds();
let millisecond = date.milliseconds();
let formatted = date.format('YYYY-MM-DD HH:mm:ss.SSS Z');

return [
  {
    json: {
      utc: date,
      year: year,
      month: month, // zero-indexed!
      day: day,
      hour: hour,
      minute: minute,
      second: second,
      millisecond: millisecond,
      formatted: formatted
    }
  }
];
```
by Evoort Solutions
Workflow: Auto-Translate WordPress Posts Using AI Translate Pro

This n8n workflow automates the translation of WordPress blog content into any language using the AI Translate Pro API, and inserts the translated text into a Google Doc.

## Workflow Steps

1. Manual Trigger - initiates the workflow manually (can be replaced with a webhook or schedule trigger).
2. WordPress Node - retrieves a specific blog post (by ID) from your WordPress site using the REST API.
3. HTTP Request Node - sends the blog content to AI Translate Pro via multipart/form-data.
4. Google Docs Node - appends the translated text into a specified Google Document using the Google Docs API.

## API Usage: AI Translate Pro

Endpoint: POST https://ai-translate-pro.p.rapidapi.com/translate.php
Content-Type: multipart/form-data

Required parameters:

| Field      | Type   | Description                           |
|------------|--------|---------------------------------------|
| text       | string | The text or HTML content to translate |
| language   | string | Target language (e.g., Hindi, French) |

Headers:

| Header Name     | Value                           |
|-----------------|---------------------------------|
| x-rapidapi-host | ai-translate-pro.p.rapidapi.com |
| x-rapidapi-key  | Your RapidAPI key               |

## Benefits of Using AI Translate Pro

- Fast AI-powered translation: instantly translate content with no need for manual input.
- Supports multiple languages: easily switch languages to serve global audiences.
- Context-aware: more accurate than basic dictionary translation tools.
- Easy integration with n8n: no-code/low-code implementation.
- Content reuse: save translations directly into Google Docs for future use or edits.
- Cost-effective: an efficient alternative to expensive manual translation services.

## Problems Solved

- Manual copy-pasting into Google Translate
- Limited or slow in-house translation
- Difficulty managing multilingual content
- Inconsistent formatting or storage

With AI Translate Pro, translations are fast, automated, and saved where your team can access them instantly.

## Example Use Cases

- Translate WordPress blog posts from English to Hindi.
- Store translated content in Google Docs for editing or reuse.
- Expand to multilingual sites with a simple language switch.
- Use AI Translate Pro in any low-code or no-code platform like n8n.

## Requirements

- WordPress REST API credentials
- RapidAPI access to AI Translate Pro
- Google Docs API service account

## More Info

Explore full documentation and pricing on the AI Translate Pro RapidAPI listing page. Create your free n8n account and set up the workflow in just a few minutes using the link below: Start Automating with n8n. Save time, stay consistent, and grow your multilingual content effortlessly!
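Putting the endpoint, headers, and form fields from the tables above together, the HTTP Request node's call could be sketched like this (the helper and its object shape are illustrative; only the URL, header names, and the text/language fields come from the API listing):

```javascript
// Sketch: assemble the AI Translate Pro request the HTTP Request node sends.
function buildTranslateRequest(text, language, rapidApiKey) {
  return {
    method: 'POST',
    url: 'https://ai-translate-pro.p.rapidapi.com/translate.php',
    headers: {
      'x-rapidapi-host': 'ai-translate-pro.p.rapidapi.com',
      'x-rapidapi-key': rapidApiKey,
    },
    // sent as multipart/form-data fields
    formData: { text, language },
  };
}
```

The `text` field accepts the post's HTML body straight from the WordPress node, and the translated result is what the Google Docs node appends to your document.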
by Kev
Important: This workflow uses the Autype community node and requires a self-hosted n8n instance.

This workflow reads every PDF from a Google Drive folder, generates a metadata title page for each document (showing filename, creation date, last modified date, and owner), merges everything into a single PDF with interleaved title pages, adds a blue company-name watermark to every page, and uploads the final result back to Google Drive.

## Who is this for?

Operations teams, project managers, legal departments, and anyone who needs to compile multiple PDFs into a single branded document package. Useful for monthly report bundles, compliance archives, client deliverables, audit documentation, or any scenario where multiple files need to be combined with clear separation between documents.

## What this workflow does

On manual trigger, the workflow lists all PDFs in a specified Google Drive folder. It generates all metadata title pages at once using the Autype Render JSON endpoint (one API call for all cover sheets). Then it loops through each document, downloads it from Drive, uploads it to Autype, extracts the corresponding title page using the outputFileId directly (no re-upload needed), and collects merge pairs. After the loop, all file IDs are interleaved (title page 1, document 1, title page 2, document 2, ...) and merged. The watermark step also uses the outputFileId from Merge directly. The final PDF is saved back to Google Drive.

## Output structure

The final merged PDF places one title page before each document (title page 1, document 1, title page 2, document 2, ...). Every page in the merged result carries a blue company-name watermark.

## How it works

1. Run Workflow - a manual trigger starts the pipeline.
2. List PDFs in Folder - a Google Drive node lists all files in the configured folder.
3. Build Title Pages JSON - a Code node creates an Autype Render JSON config with one page section per document. Each title page shows the document name, creation date, last modified date, owner, and document number.
4. Render Title Pages PDF - Autype renders all title pages as a single multi-page PDF in one API call (one page per document).
5. Upload Title Pages PDF - the rendered PDF is uploaded to Autype Document Tools for page extraction in the loop.
6. Prepare Loop Items - a Code node creates one item per document with the Drive file ID, title page number, and the title pages file ID.
7. Loop Over Documents - for each document:
   - Download PDF from Drive - downloads the file binary from Google Drive.
   - Upload Document to Autype - uploads the document to get a file ID.
   - Extract Title Page - uses the Keep/Remove Pages operation to extract the matching page from the title pages PDF (page 1 for doc 1, page 2 for doc 2, etc.). Returns outputFileId directly.
   - Collect Merge Pair - stores the outputFileId from Extract Title Page and the document file ID as a pair for the final merge.
8. Build Final Merge List - a Code node interleaves all pairs into one comma-separated file ID string.
9. Merge All PDFs - combines all files into a single PDF in order. Returns outputFileId.
10. Add Company Watermark - uses the outputFileId from Merge directly (no re-upload). Stamps the company name in blue (#2563EB) on every page at 60% opacity.
11. Save Final PDF to Drive - uploads the watermarked result as merged-documents-YYYY-MM-DD.pdf to the configured Google Drive folder.

## Setup

1. Install the Autype community node (n8n-nodes-autype) via Settings > Community Nodes.
2. Create an Autype API credential with your API key from app.autype.com (see API Keys in Settings).
3. Create a Google Drive OAuth2 credential and connect your Google account.
4. Import this workflow and select your Autype credential in all Autype nodes and your Google Drive credential in all Google Drive nodes.
5. Set YOUR_FOLDER_ID in the "List PDFs in Folder" node to the ID from your folder URL (https://drive.google.com/drive/folders/YOUR_FOLDER_ID).
6. Set YOUR_FOLDER_ID in the "Save Final PDF to Drive" node to the same folder ID (or a different output folder).
7. Change the watermark text in the "Add Company Watermark" node to your company name.
8. Click Test Workflow to run the pipeline.

Note: This is a community node. You need a self-hosted n8n instance to use community nodes.

## Requirements

- Self-hosted n8n instance (community nodes are not available on n8n Cloud)
- Autype account with API key (free tier available)
- n8n-nodes-autype community node installed
- Google Drive account with OAuth2 credentials

## How to customize

- **Change title page content:** edit the "Build Title Pages JSON" Code node to add or remove metadata fields, change font sizes, or adjust colors.
- **Change watermark text:** replace "Your Company Name" in the "Add Company Watermark" node with your actual company name, "CONFIDENTIAL", "DRAFT", or any other label.
- **Adjust watermark style:** change color, fontSize, opacity, or rotation in the watermark options. Set pages to a range (e.g. "2-") to skip title pages.
- **Use a different output folder:** change the folder ID in the "Save Final PDF to Drive" node to upload to a separate location.
- **Add compression:** insert a Compress operation between the watermark and the Drive upload to reduce file size.
- **Add password protection:** insert a Protect operation after watermarking to encrypt the final PDF (see the A06 workflow for an example).
- **Control document order:** add sorting logic in the "Build Title Pages JSON" Code node to define the order in the merged PDF.
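The interleaving done by the "Build Final Merge List" Code node can be sketched as follows. The property names (`titlePageFileId`, `documentFileId`) are illustrative, not necessarily the workflow's actual field names:

```javascript
// Sketch: turn the collected (title page, document) pairs into the
// comma-separated file ID string the Merge operation consumes, keeping
// the title-page-before-document ordering.
function buildMergeList(pairs) {
  return pairs
    .flatMap(p => [p.titlePageFileId, p.documentFileId])
    .join(',');
}
```

For two documents this yields "t1,d1,t2,d2", which is exactly the page order described under "Output structure".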
by darkesthour111
I used this to watch a product page: when the text "Out Of Stock" is no longer found on the page, the item has come back in stock and a notification is sent. Set the URL in the HTTP Request node, and set your Webhook URL and messages in the Discord nodes.
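The check itself is just a substring test on the fetched page body, along these lines (a minimal sketch; the marker text is whatever your shop's page actually shows):

```javascript
// Sketch: an item is considered back in stock when the page HTML
// no longer contains the out-of-stock marker text.
function isBackInStock(html, marker = 'Out Of Stock') {
  return !html.includes(marker);
}
```

An IF node applying this condition to the HTTP Request output routes to the Discord notification only when the marker disappears.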