by Growth AI
# French Public Procurement Tender Monitoring Workflow

## Overview
This n8n workflow automates the monitoring and filtering of French public procurement tenders (BOAMP - Bulletin Officiel des Annonces des Marchés Publics). It retrieves tenders based on your preferences, filters them by market type, and identifies relevant opportunities using keyword matching.

## Who is this for?
- Companies seeking French public procurement opportunities
- Consultants monitoring specific market sectors
- Organizations tracking government contracts in France

## What it does
The workflow operates in two main phases:

**Phase 1: Automated Tender Collection**
- Retrieves all tenders from the BOAMP API based on your configuration
- Filters by market type (Works, Services, Supplies)
- Stores complete tender data in Google Sheets
- Handles pagination automatically for large datasets

**Phase 2: Intelligent Keyword Filtering**
- Downloads and extracts text from tender PDF documents
- Searches for your specified keywords within tender content
- Saves matching tenders to a separate "Target" sheet for easy review
- Tracks processing status to avoid duplicates

## Requirements
- n8n instance (self-hosted or cloud)
- Google account with Google Sheets access
- Google Sheets API credentials configured in n8n

## Setup Instructions

### Step 1: Duplicate the Configuration Spreadsheet
1. Access the template spreadsheet: Configuration Template
2. Click File → Make a copy
3. Save to your Google Drive
4. Note the URL of your new spreadsheet

### Step 2: Configure Your Preferences
Open your copied spreadsheet and configure the Config tab:
- **Market Types** - check the categories you want to monitor: Travaux (Works/Construction), Services, Fournitures (Supplies)
- **Search Period** - enter the number of days to look back (e.g., "30" for the last 30 days)
- **Keywords** - enter your search terms as a comma-separated list (e.g., "informatique, cloud, cybersécurité")

### Step 3: Import the Workflow
1. Copy the workflow JSON from this template
2. In n8n, click Workflows → Import from File/URL
3. Paste the JSON and import

### Step 4: Update Google Sheets Connections
Replace all Google Sheets node URLs with your spreadsheet URL. Nodes to update:
- Get config (2 instances)
- Get keyword
- Get Offset
- Get All
- Append row in sheet
- Update offset
- Reset Offset
- Ok
- Target offre

For each node:
1. Open the node settings
2. Update the Document ID field with your spreadsheet URL
3. Verify the Sheet Name matches your spreadsheet tabs

### Step 5: Configure Schedule Triggers
The workflow has two schedule triggers:
- **Schedule Trigger1** (Phase 1 - Tender Collection). Default: `0 8 1 * *` (1st day of month at 8:00 AM). Adjust based on how frequently you want to collect tenders.
- **Schedule Trigger** (Phase 2 - Keyword Filtering). Default: `0 10 1 * *` (1st day of month at 10:00 AM). Should run after Phase 1 completes.

To modify: open the Schedule Trigger node, click Cron Expression, and adjust the timing as needed.

### Step 6: Test the Workflow
1. Manually execute Phase 1 by clicking the Schedule Trigger1 node and selecting Execute Node
2. Verify tenders appear in your "All" sheet
3. Execute Phase 2 by triggering the Schedule Trigger node
4. Check the "Target" sheet for matching tenders

## How the Workflow Works

### Phase 1: Tender Collection Process
1. **Configuration Loading** - reads your preferences from Google Sheets
2. **Offset Management** - tracks pagination position for API calls
3. **API Request** - fetches up to 100 tenders per batch from BOAMP
4. **Market Type Filtering** - keeps only selected market categories
5. **Data Storage** - formats and saves tenders to the "All" sheet
6. **Pagination Loop** - continues until all tenders are retrieved
7. **Offset Reset** - prepares for the next execution

### Phase 2: Keyword Matching Process
1. **Keyword Loading** - retrieves search terms from configuration
2. **Tender Retrieval** - gets unprocessed tenders from the "All" sheet
3. **Sequential Processing** - loops through each tender individually
4. **PDF Extraction** - downloads and extracts text from tender documents
5. **Keyword Analysis** - searches for matches with accent/case normalization
6. **Status Update** - marks the tender as processed
7. **Match Evaluation** - determines if keywords were found
8. **Target Storage** - saves relevant tenders with match details

## Customization Options

### Adjust API Parameters
In the HTTP Request node, you can modify:
- `limit`: number of records per batch (default: 100)
- Additional filters in the `where` parameter

### Modify Keyword Matching Logic
Edit the Get query node to adjust:
- Text normalization (accent removal, case sensitivity)
- Match proximity requirements
- Context length around matches

### Change Data Format
Update the Format Results node to modify:
- Date formatting
- PDF URL generation
- Field mappings

## Spreadsheet Structure
Your Google Sheets should contain these tabs:
- **Config** - your configuration settings
- **Offset** - pagination tracking (managed automatically)
- **All** - complete tender database
- **Target** - filtered tenders matching your keywords

## Troubleshooting
**No tenders appearing in the "All" sheet:**
- Verify your configuration period isn't too restrictive
- Check that at least one market type is selected
- Ensure the API is accessible (test the HTTP Request node)

**PDF extraction errors:**
- Some PDFs may be malformed or protected
- Check the URL generation in the Format Results node
- Verify PDF URLs are accessible in a browser

**Duplicate tenders in the Target sheet:**
- Ensure the "Ok" status is being written correctly
- Check that the Filter node is excluding processed tenders
- Verify row_number matching in update operations

**Keywords not matching:**
- Keywords are case-insensitive and accent-insensitive
- Verify your keywords are spelled correctly
- Check that the extracted text contains your terms

## Performance Considerations
- Phase 1 processes 100 tenders per iteration with a 10-second wait between batches
- Phase 2 processes tenders sequentially to avoid overloading PDF extraction
- Large datasets (1000+ tenders) may take significant time to process
- Consider running Phase 1 less frequently if tender volume is manageable

## Data Privacy
- All data is stored in your Google Sheets
- No external databases or third-party storage
- The BOAMP API is publicly accessible (no authentication required)
- Ensure your Google Sheets permissions are properly configured

## Support and Updates
This workflow retrieves data from the BOAMP public API. If the API structure changes, nodes may require updates. Monitor the workflow execution logs for errors and adjust accordingly.
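The accent- and case-insensitive matching described above can be sketched in plain JavaScript, as used in n8n Code nodes. This is a minimal illustration of the approach, not the Get query node's exact code; the `text` and `keywords` inputs are placeholders:

```javascript
// Normalize text: lowercase and strip accents (é → e) via Unicode
// decomposition, so "Cybersécurité" matches the keyword "cybersecurite".
function normalize(s) {
  return s
    .normalize('NFD')                 // split letters from combining accents
    .replace(/[\u0300-\u036f]/g, '')  // drop the combining accent marks
    .toLowerCase();
}

// Return the keywords found in the tender text.
function findMatches(text, keywords) {
  const haystack = normalize(text);
  return keywords
    .map(k => k.trim())
    .filter(k => k.length > 0 && haystack.includes(normalize(k)));
}

const text = 'Marché public : services de Cybersécurité et hébergement Cloud.';
const keywords = 'informatique, cloud, cybersécurité'.split(',');
console.log(findMatches(text, keywords)); // → [ 'cloud', 'cybersécurité' ]
```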
by gotoHuman
Collaborate with an AI Agent on a joint document, e.g. for creating your content marketing strategy, a sales plan, project status updates, or market analysis. The AI Agent generates markdown text that you can review and edit in gotoHuman, and only then is the existing Google Doc updated. In this example we use AI to update our company's content strategy for the next quarter.

## How It Works
The AI Agent has access to other documents that provide enough context to write the content strategy. We ask it to generate the text in markdown format. To ensure our strategy document is not changed without our approval, we request a human review using gotoHuman. There the markdown content can be edited and properly previewed. Our workflow resumes once the review is completed. We check if the content was approved and then write the (potentially edited) markdown to our Google Docs file via the Google Drive node.

## How to set up
1. Most importantly, install the verified gotoHuman node **before** importing this template! (Just add the node to a blank canvas before importing. Works with n8n cloud and self-hosted.)
2. Set up your credentials for gotoHuman, OpenAI, and Google Docs/Drive
3. In gotoHuman, select and create the pre-built review template "Strategy agent" or import the ID: F4sbcPEpyhNKBKbG9C1d
4. Select this template in the gotoHuman node

## Requirements
You need accounts for:
- gotoHuman (human supervision)
- OpenAI (doc writing)
- Google Docs/Drive

## How to customize
- Let the workflow run on a schedule, or create and connect a manual trigger in gotoHuman that lets you capture additional human input to feed your agent
- Provide the agent with more context to write the content strategy
- Use the gotoHuman response (or a Google Drive file change trigger) to run additional AI agents that can execute on the new strategy
by Atta
This workflow automatically turns any YouTube video into a structured blog post with Gemini AI. By sending a simple POST request with a YouTube URL to a webhook, it downloads the video's audio, transcribes the content, and generates a blog-ready article with a title, description, tags, and category. The final result, along with the full transcript and original video URL, is delivered to your chosen webhook or CMS.

## How it works
The workflow handles the entire process of transforming YouTube videos into complete blog posts using Gemini AI transcription and structured text generation. Once triggered, it:
1. Downloads the video's audio
2. Transcribes the spoken content into text
3. Generates a blog post in the video's original language
4. Creates:
   - A clear and engaging title
   - A short description
   - Suggested category and tags
   - The full transcript of the video
   - The original YouTube video URL

This makes it easy to repurpose video content into publish-ready articles in minutes. This template is ideal for content creators, marketers, educators, and bloggers who want to quickly turn video content into written posts without manual transcription or editing.

## Setup Instructions
1. Install yt-dlp on your local machine or server where n8n runs. This is required to download YouTube audio.
2. Get a Google Gemini API key and configure it in your AI nodes.
3. Webhook input configuration:
   - Endpoint: the workflow starts with a Webhook Trigger
   - Method: POST
   - Example request body:

```json
{ "videoUrl": "https://www.youtube.com/watch?v=lW5xEm7iSXk" }
```

4. Configure the output webhook: add your target endpoint in the last node where the blog post JSON is sent. This could be your CMS, a Notion database, or another integration.

## Customization Guidance
- **Writing Style:** Update the AI Agent's prompt to adjust tone (e.g., casual, professional, SEO-optimized).
- **Metadata:** Modify how categories and tags are generated to fit your website's taxonomy.
- **Integration:** Swap the final webhook with WordPress, Ghost, Notion, or Slack to fit your publishing workflow.
- **Transcript Handling:** Save the full transcript separately if you also want searchable video archives.
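When working with the incoming `videoUrl`, it is often useful to pull out the bare video ID (for filenames, deduplication, or passing to yt-dlp). A hypothetical helper for an n8n Code node, not part of the template itself, might look like:

```javascript
// Hypothetical helper: extract the YouTube video ID from the webhook payload.
function extractVideoId(videoUrl) {
  const url = new URL(videoUrl);
  // Standard watch URLs: https://www.youtube.com/watch?v=ID
  if (url.searchParams.has('v')) return url.searchParams.get('v');
  // Short links: https://youtu.be/ID
  if (url.hostname === 'youtu.be') return url.pathname.slice(1);
  return null;
}

console.log(extractVideoId('https://www.youtube.com/watch?v=lW5xEm7iSXk')); // → lW5xEm7iSXk
```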
by Salman Mehboob
Stop manually downloading CSVs or risking account bans with sketchy UI scrapers! This workflow bridges the gap between Apollo.io's powerful search filters and your Google Sheets database using Apollo's official REST API. Perfect for Sales Development Reps (SDRs), Growth Marketers, and B2B Founders, this plug-and-play workflow turns a standard Apollo search URL into an automated lead-generation machine.

## 🚀 What This Workflow Does
- **Smart URL Conversion:** You simply paste a standard Apollo.io search URL into the workflow. A custom Code node automatically parses all your UI filters (job titles, locations, employee count, tags) and converts them into a clean JSON payload for the Apollo API.
- **Official API Scraping:** Uses the `/v1/mixed_people/search` endpoint to fetch the targeted contacts safely and reliably.
- **Smart Pagination & Rate Limiting:** Automatically loops through the search results page by page. A built-in 2-second wait timer ensures you never hit Apollo's API rate limits.
- **Direct Google Sheets Sync:** Extracts the exact data you need (First Name, Last Name, Title, Contact/Company LinkedIn, Location, Company Details, Phone Numbers) and appends it directly to your spreadsheet.

## 🎯 Who is this for?
- **Sales Teams:** Build targeted lead lists automatically while you sleep.
- **Agencies:** Scrape and deliver customized lead lists directly into shared client spreadsheets.
- **Growth Hackers:** Bypass the standard CSV export limits by leveraging API pagination.

## ⚙️ How It Works (Node Breakdown)
- **Set Node (Input):** This is where you paste your Apollo search URL and define which page to start scraping from.
- **Code Node (Filter Builder):** A production-level JavaScript parser that translates URL query parameters into API-supported arguments, automatically dropping unsupported UI-only parameters.
- **HTTP Request:** Connects to the Apollo API using your personal API key.
- **Set & If Nodes (Pagination Logic):** Calculate current and total pages, controlling the loop to keep fetching leads until the final page is reached.
- **Google Sheets Node:** Maps the structured JSON output into clean rows and columns.

## 🛠️ Step-by-Step Setup Guide

### 1. Get Your Credentials
- **Apollo API Key:** Log into your Apollo.io account, navigate to **Settings > Integrations > API**, and generate a new API key.
- **Google Sheets:** Ensure you have an active Google Sheets credential set up in your n8n workspace.

### 2. Configure the Workflow
1. Open the Fetch Leads (Apollo) HTTP Request node. Under Headers, replace the default `APOLLO_API_KEY` value with your actual API key.
2. Open the Write to Google Sheets node, select your Google Sheets credential, and point it to your specific Document and Sheet. (Make sure your sheet has headers like First Name, Last Name, Title, contact Linkedin, etc., to match the workflow mapping.)
3. Open the Apollo Search Input node and paste your target Apollo search URL into the `url` field. You can also specify the `start_page` (default is 1).

### 3. Run & Automate!
Click Execute Workflow. The workflow will fetch the first page, write the leads to your sheet, pause for 2 seconds to respect rate limits, and loop until all leads are extracted. (Note: to restrict how many pages the workflow scrapes, simply edit the `total_pages` assignment in the "Extract Pagination Info" node to a fixed number.)

## 📬 Contact Information
For help and queries, contact:
- **LinkedIn:** Salman Mehboob
- **Email:** salmanmehboob1947@gmail.com
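The Code node's URL-to-payload conversion can be sketched roughly like this. It is a simplified illustration, not the production parser: the real node handles many more filters, and the exact Apollo parameter names (`person_titles`, `person_locations`, `per_page`) should be checked against Apollo's API documentation:

```javascript
// Convert an Apollo UI search URL into a JSON payload for the people-search API.
// Apollo encodes filters in the query string, e.g. ?personTitles[]=CEO&...
function apolloUrlToPayload(searchUrl, page = 1) {
  const query = searchUrl.split('?')[1] || '';
  const params = new URLSearchParams(query);
  const payload = { page, per_page: 100 };
  const titles = params.getAll('personTitles[]');     // UI filter → API argument
  const locations = params.getAll('personLocations[]');
  if (titles.length) payload.person_titles = titles;
  if (locations.length) payload.person_locations = locations;
  return payload;
}

const url = 'https://app.apollo.io/#/people?personTitles[]=CEO&personLocations[]=United%20States';
console.log(apolloUrlToPayload(url, 1));
// → { page: 1, per_page: 100, person_titles: ['CEO'], person_locations: ['United States'] }
```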
by Khairul Muhtadin
The Prompt converter workflow tackles the challenge of turning your natural language video ideas into perfectly formatted JSON prompts tailored for Veo 3 video generation. By leveraging Langchain AI nodes and Google Gemini, this workflow automates and refines your input to help you create high-quality videos faster and with more precision—think of it as your personal video prompt translator that speaks fluent cinematic!

## 💡 Why Use Prompt Converter?
- **Save time:** Automate converting complex video prompts into structured JSON, cutting manual formatting headaches and boosting productivity.
- **Avoid guesswork:** Eliminate unclear video prompt details by generating detailed, cinematic descriptions that align perfectly with Veo 3 specs.
- **Improve output quality:** Optimize every parameter for Veo 3's video generation model to get realistic and stunning results every time.
- **Gain a creative edge:** Turn vague ideas into vivid video concepts with AI-powered enhancement—your video project's secret weapon.

## ⚡ Perfect For
- **Video creators:** Content developers wanting quick, precise video prompt formatting without coding hassles.
- **AI enthusiasts:** Developers and hobbyists exploring Langchain and Google Gemini for media generation.
- **Marketing teams:** Professionals creating video ads or visuals who need consistent prompt structuring that saves time.

## 🔧 How It Works
1. ⏱ **Trigger:** User submits a free text prompt via message or webhook.
2. 📎 **Process:** The text goes through an AI model that understands and reworks it into detailed JSON parameters tailored for Veo 3.
3. 🤖 **Smart Logic:** Langchain nodes parse and optimize the prompt with cinematic details, set reasonable defaults, and structure the data precisely.
4. 💌 **Output:** The refined JSON prompt is sent to Google Gemini for video generation with optimized settings.

## 🔐 Quick Setup
1. Import the JSON file to your n8n instance
2. Add credentials: Azure OpenAI, Gemini API, OpenRouter API
3. Customize: adjust prompt templates or default parameters in the Prompt converter node
4. Test: run your workflow with sample text prompts to see videos come to life

## 🧩 You'll Need
- Active n8n instance
- Azure OpenAI API
- Gemini API key
- OpenRouter API (alternative AI option)

## 🛠️ Level Up Ideas
Add integration with video hosting platforms to auto-upload generated videos.

## 🧠 Nodes Used
- **Prompt Input** (Chat Trigger)
- **OpenAI** (Azure OpenAI GPT model)
- **Alternative** (OpenRouter API)
- **Prompt converter** (Langchain chain LLM for JSON conversion)
- **JSON parser** (structured output extraction)
- **Generate a video** (Google Gemini video generation)

Made by: Khaisa Studio
Tags: video generation, AI, Langchain, automation, Google Gemini
Category: Video Production
Need custom work? Contact me
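For illustration, a converted prompt might look like the following. The field names here are purely illustrative placeholders; the actual keys depend on your Veo 3 parameter set and on the template configured in the Prompt converter node:

```json
{
  "prompt": "Golden-hour drone shot gliding over a misty pine forest, cinematic color grading",
  "style": "cinematic",
  "camera": { "movement": "slow aerial dolly", "angle": "high" },
  "duration_seconds": 8,
  "aspect_ratio": "16:9",
  "negative_prompt": "text overlays, watermarks"
}
```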
by WeblineIndia
# Fill iOS localization gaps from .strings → Google Sheets and PR with placeholders (GitHub)

This n8n workflow automatically identifies missing translations in .strings files across iOS localizations (e.g., Base.lproj vs fr.lproj) and generates a report in Google Sheets. Optionally, it creates a GitHub PR to insert placeholder strings ("TODO_TRANSLATE") so builds don't fail. Supports DRY_RUN mode.

## Who's it for
- iOS teams who want fast feedback on missing translations.
- Localization managers who want a shared sheet to assign work to translators.

## How it works
1. A GitHub Webhook triggers on push or pull request.
2. The iOS repo is scanned for .strings files under Base.lproj or en.lproj and their target-language counterparts.
3. It compares keys and identifies what's missing.
4. A new or existing Google Sheet tab (e.g., fr) is updated with missing entries.
5. If enabled, it creates a GitHub PR with placeholder keys (e.g., "TODO_TRANSLATE").

## How to set up
1. Import the workflow JSON into your n8n instance.
2. Set Config node values like:

```json
{
  "GITHUB_OWNER": "your-github-user-name",
  "GITHUB_REPO": "your-iOS-repo-name",
  "BASE_BRANCH": "develop",
  "SHEET_ID": "<YOUR_GOOGLE_SHEET_ID>",
  "ENABLE_PR": "true",
  "IOS_SOURCE_GLOB": "/Base.lproj/*.strings,/en.lproj/*.strings",
  "IOS_TARGET_GLOB": "*/.lproj/*.strings",
  "PLACEHOLDER_VALUE": "TODO_TRANSLATE",
  "BRANCH_TEMPLATE": "chore/l10n-gap-{{YYYYMMDD}}"
}
```

3. Create a GitHub Webhook:
   - URL: `https://your-n8n-instance/webhook/l10n-gap-ios`
   - Content-Type: application/json
   - Trigger on: Push, Pull Request
4. Connect credentials:
   - GitHub token with repo scope
   - Google Sheets API
   - (Optional) Slack OAuth + SMTP

## Requirements
| Tool | Needed For | Notes |
| --- | --- | --- |
| GitHub Repo | Webhook, API for PRs | repo token or App |
| Google Sheets | Sheet output | Needs valid SHEET_ID or create-per-run |
| Slack (optional) | Notifications | chat:write scope |
| SMTP (optional) | Email fallback | Standard SMTP creds |
## How to customize
- **Multiple Locales:** Add comma-separated values to TARGET_LANGS_CSV (e.g., fr,de,es).
- **Globs:** Adjust IOS_SOURCE_GLOB and IOS_TARGET_GLOB to scan only certain modules or file patterns.
- **Ignore Rules:** Add IGNORE_KEY_PREFIXES_CSV to skip certain internal/debug strings.
- **Placeholder Value:** Change PLACEHOLDER_VALUE to something meaningful like "@@@".
- **Slack/Email:** Set SLACK_CHANNEL and EMAIL_FALLBACK_TO_CSV appropriately.
- **DRY_RUN:** Set to true to skip GitHub PR creation but still update the sheet.

## Add-ons
- **Android support:** Add a second path for strings.xml (values → values-<lang>), same diff → Sheets → placeholder PR.
- **Multiple languages at once:** Expand TARGET_LANGS_CSV and loop tabs + placeholder commits per locale.
- **.stringsdict handling:** Validate plural/format entries and open a precise PR.
- **Translator DMs:** Provide a LANG → Slack handle/email map to DM translators with their specific file/key counts.
- **GitLab/Bitbucket variants:** Replace GitHub API calls with GitLab/Bitbucket equivalents to open Merge Requests.

## Use Case Examples
- Before a test build, ensure fr has all keys present—placeholders keep the app compiling.
- A weekly run creates a single sheet for translators and a PR with placeholders, avoiding last-minute breakages.
- A new screen adds 12 strings; the bot flags and pre-fills them across locales.

## Common troubleshooting
| Issue | Possible Cause | Solution |
| --- | --- | --- |
| No source files found | Glob doesn't match Base.lproj or en.lproj | Adjust IOS_SOURCE_GLOB |
| Target file missing | fr.lproj doesn't exist yet | Will be created in placeholder PR |
| Parsing skips entries | Non-standard string format in file | Ensure proper .strings format `"key" = "value";` |
| Sheet not updating | SHEET_ID missing or insufficient permission | Add valid ID or allow write access |
| PR not created | ENABLE_PR=false or no missing keys | Enable PR and ensure at least one key is missing |
| Slack/Email not received | Missing credentials or config | Configure Slack/SMTP properly and set recipient fields |

## Need Help?
Want to expand this for Android? Loop through 5+ locales at once? Or replace GitHub with GitLab? Contact our n8n Team at WeblineIndia with your repo & locale setup and we'll help tailor it to your translation workflow!
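The core diff step (comparing keys between a base-locale .strings file and a target-locale one) can be sketched as follows. This is a minimal sketch assuming the standard `"key" = "value";` format; the workflow's actual parser may also handle comments and escape sequences:

```javascript
// Minimal .strings parser: collect "key" = "value"; pairs into an object.
function parseStrings(content) {
  const entries = {};
  const re = /"((?:[^"\\]|\\.)*)"\s*=\s*"((?:[^"\\]|\\.)*)"\s*;/g;
  let m;
  while ((m = re.exec(content)) !== null) entries[m[1]] = m[2];
  return entries;
}

// Keys present in the base locale but missing from the target locale.
function missingKeys(base, target) {
  return Object.keys(base).filter(k => !(k in target));
}

const base = parseStrings('"welcome_title" = "Welcome";\n"cta_button" = "Start";');
const fr = parseStrings('"welcome_title" = "Bienvenue";');
console.log(missingKeys(base, fr)); // → [ 'cta_button' ]
```

Each missing key would then become one row in the locale's sheet tab and, if ENABLE_PR is set, one placeholder entry in the PR.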
by Anirudh Aeran
This workflow provides a complete backend solution for building your own WhatsApp marketing dashboard. It enables you to send dynamic, personalized, and rich-media broadcast messages to an entire contact list stored in Google Sheets. The system is built on three core functions: automatically syncing your approved Meta templates, providing an API endpoint for your front-end to fetch those templates, and a powerful broadcast engine that merges your contact data with the selected template for mass delivery.

## Who's it for?
This template is for marketers, developers, and businesses who want to run sophisticated WhatsApp campaigns without being limited by off-the-shelf tools. It's perfect for anyone who needs to send personalized bulk messages with dynamic content (like unique images or links for each user) and wants to operate from a simple, custom-built web interface.

## How it works
This workflow is composed of three independent, powerful parts:

1. **Automated Template Sync:** A scheduled trigger runs periodically to fetch all of your approved message templates directly from your Meta Business Account. It then clears and updates an n8n Data Table, ensuring your list of available templates is always perfectly in sync with Meta.
2. **Front-end API Endpoint:** A dedicated webhook acts as an API for your dashboard. When your front-end calls this endpoint, it returns a clean JSON list of all available templates from the n8n Data Table, which you can use to populate a dropdown menu for the user.
3. **Dynamic Broadcast Engine:** The main webhook listens for a request from your front-end, which includes the name of the template to send. It then:
   - Looks up the template's structure in the Data Table.
   - Fetches all contacts from your Google Sheet.
   - For each contact, a Code node dynamically constructs a personalized API request. It can merge the contact's name into the body, add a unique user ID to a button's URL, and even pull a specific image URL from your Google Sheet to use as a dynamic header.
   - Sends the fully personalized message to the contact.

## How to set up
1. **Pre-requisite (front-end):** This workflow is a backend and is designed to be triggered by a front-end application. You will need a simple UI with a dropdown to select a template and a button to trigger the broadcast.
2. **Meta for Developers:** You need a Meta App with the WhatsApp Business API configured. From your app, you will need your WhatsApp Business Account ID, a Phone Number ID, and a permanent System User Access Token.
3. **n8n Data Table:** Create an n8n Data Table (e.g., named "WhatsApp Templates") with the following columns: template_name, language_code, components_structure, template_id, status, category.
4. **Google Sheet:** Create a Google Sheet to store your contacts. It must have columns like Phone Number, Full Name, and, for dynamic images, Marketing Image URL.
5. **Configure credentials:**
   - Create an HTTP Header Auth credential in n8n for WhatsApp. Use `Authorization` as the Header Name and `Bearer YOUR_PERMANENT_TOKEN` as the value.
   - Add your Google Sheets credentials.
6. **Configure nodes:**
   - In both HTTP Request nodes, select your WhatsApp Header Auth credential. Update the URLs with your own Phone Number ID and WABA ID.
   - In the Google Sheets node, select your credential and enter the Sheet ID.
   - In all Data Table nodes, select the Data Table you created.
7. **First run:** Manually execute the "Sync Meta Templates" flow (starting with the Schedule Trigger) once to populate your Data Table with your templates.
8. **Activate:** Activate all parts of the workflow.

## Requirements
- A Meta for Developers account with a configured WhatsApp Business App.
- A permanent System User Access Token for the WhatsApp Business API.
- A Google Sheets account.
- A front-end application/dashboard to trigger the workflow.
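The per-contact request the Code node builds can be sketched roughly as below. This is a simplified illustration following the general shape of Meta's Cloud API template-message format; the real node derives the components from the template's stored `components_structure`, and the column names (`Phone Number`, `Full Name`, `Marketing Image URL`) are the sheet columns described above:

```javascript
// Sketch: build one personalized template message for one sheet row.
function buildMessage(contact, template) {
  return {
    messaging_product: 'whatsapp',
    to: contact['Phone Number'],
    type: 'template',
    template: {
      name: template.template_name,
      language: { code: template.language_code },
      components: [
        { // dynamic image header pulled from the Google Sheet
          type: 'header',
          parameters: [{ type: 'image', image: { link: contact['Marketing Image URL'] } }],
        },
        { // merge the contact's name into the body placeholder {{1}}
          type: 'body',
          parameters: [{ type: 'text', text: contact['Full Name'] }],
        },
      ],
    },
  };
}

const msg = buildMessage(
  { 'Phone Number': '15551234567', 'Full Name': 'Ada', 'Marketing Image URL': 'https://example.com/a.jpg' },
  { template_name: 'spring_sale', language_code: 'en_US' }
);
console.log(msg.template.name); // → spring_sale
```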
by Daniel
Harness OpenAI's Sora 2 for instant video creation from text or images using fal.ai's API—powered by GPT-5 for refined prompts that ensure cinematic quality. This template processes form submissions, intelligently routes to text-to-video (with mandatory prompt enhancement) or image-to-video modes, and polls for completion before redirecting to your generated clip.

## 📋 What This Template Does
Users submit prompts, aspect ratios (9:16 or 16:9), models (sora-2 or pro), durations (4s, 8s, or 12s), and optional images via a web form. For text-to-video, GPT-5 automatically refines the prompt for optimal Sora 2 results; image mode uses the raw input. It calls one of four fal.ai endpoints (text-to-video, text-to-video/pro, image-to-video, image-to-video/pro), then loops every 60s to check status until the video is ready.

- Handles dual modes: text (with GPT-5 enhancement) or image-seeded generation
- Supports pro upgrades for higher fidelity and longer clips
- Auto-uploads images to a temp host and polls asynchronously for hands-free results
- Redirects directly to the final video URL on completion

## 🔧 Prerequisites
- n8n instance with HTTP Request and LangChain nodes enabled
- fal.ai account for Sora 2 API access
- OpenAI account for GPT-5 prompt refinement

## 🔑 Required Credentials

### fal.ai API Setup
1. Sign up at fal.ai and navigate to Dashboard → API Keys
2. Generate a new key with "sora-2" permissions (full access recommended)
3. In n8n, create a "Header Auth" credential: name it "fal.ai", set Header Name to "Authorization", and the value to "Key [Your API Key]"

### OpenAI API Setup
1. Log in at platform.openai.com → API Keys (top-right profile menu)
2. Click "Create new secret key" and copy it (store securely)
3. In n8n, add an "OpenAI API" credential: paste the key and select the GPT-5 model in the LLM node

## ⚙️ Configuration Steps
1. Import the workflow JSON into your n8n instance via Settings → Import from File
2. Assign fal.ai and OpenAI credentials to the relevant HTTP Request and LLM nodes
3. Activate the workflow—the form URL auto-generates in the trigger node
4. Test by submitting a sample prompt (e.g., "A cat chasing a laser"); monitor executions for video output
5. Adjust the polling wait (60s node) for longer generations if needed

## 🎯 Use Cases
- **Social Media Teams:** Generate 9:16 vertical Reels from text ideas, like quick product animations enhanced by GPT-5 for professional polish
- **Content Marketers:** Animate uploaded images into 8s promo clips, e.g., turning a static ad graphic into a dynamic story for email campaigns
- **Educators and Trainers:** Create 4s explainer videos from outlines, such as historical reenactments, using pro mode for detailed visuals
- **App Developers:** Embed as a backend service to process user prompts into Sora 2 videos on demand for creative tools

## ⚠️ Troubleshooting
- **API quota exceeded:** Check the fal.ai dashboard for usage limits; upgrade to the pro tier or extend polling waits
- **Prompt refinement fails:** Ensure the GPT-5 credential is set and the output matches the JSON schema—test the LLM node independently
- **Image upload errors:** Confirm the file is a JPG/PNG under 10MB; verify the tmpfiles.org endpoint with a manual curl test
- **Endless polling loop:** Add an IF node after 10 checks to time out; increase the wait to 120s for 12s pro generations
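The suggested timeout guard (an IF node after 10 checks) boils down to a small decision per polling iteration. A sketch of that logic, with illustrative status values (check fal.ai's queue API for the actual status strings your endpoint returns):

```javascript
// Decide what the loop should do after each status check.
function nextAction(status, attempt, maxAttempts = 10) {
  if (status === 'COMPLETED') return 'redirect'; // video URL is ready
  if (attempt >= maxAttempts) return 'timeout';  // give up after N checks
  return 'wait';                                 // go back through the 60s Wait node
}

console.log(nextAction('IN_PROGRESS', 3));  // → wait
console.log(nextAction('COMPLETED', 4));    // → redirect
console.log(nextAction('IN_PROGRESS', 10)); // → timeout
```

With a 60s wait and `maxAttempts = 10`, the workflow gives up after roughly ten minutes instead of looping forever.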
by Lucio
Automatically upload your Instagram videos to YouTube with configurable time gaps between each upload, using n8n Tables for deduplication.

## How it works
1. **Fetches recent Instagram posts** via the Meta Graph API and filters to only video content (VIDEO/REELS)
2. **Checks each video** against an n8n Table to skip already-uploaded content
3. **Waits a configurable delay** between uploads to space out your publishing schedule
4. **Processes metadata** - extracts the title from the caption, converts hashtags to YouTube tags
5. **Uploads to YouTube** with your configured privacy, category, and safety settings
6. **Records the upload** in the n8n Table to prevent duplicates on future runs

## Set up steps
Time estimate: 10-15 minutes

1. Create an n8n Table with two text fields: postId and youtubeId
2. Connect your Instagram credentials (Meta Developer Bearer Token)
3. Connect your YouTube OAuth2 account
4. Edit the Configuration node to set your preferred upload delay, privacy status, and category
5. Activate the workflow

Detailed setup instructions and configuration options are documented in the sticky notes inside the workflow.

## Required n8n Table
| Field | Type | Purpose |
|-------|------|---------|
| postId | String | Stores the Instagram post ID to prevent re-uploading |
| youtubeId | String | Stores the resulting YouTube video ID for reference |

How to create:
1. Go to n8n Tables in your n8n instance
2. Create a new table named "Instagram To YouTube"
3. Add two columns: postId (text) and youtubeId (text)
4. Select this table in both the "Check If Already Uploaded" and "Save Upload Record" nodes

## Configuration Options
Edit the Configuration node to customize:

```
{
  "includeSourceLink": true,   // Include Instagram link in description
  "waitTimeoutSeconds": 900,   // Delay between uploads (900 = 15 min)
  "maxTitleLength": 100,       // Maximum YouTube title length
  "categoryId": "24",          // YouTube category (24 = Entertainment)
  "privacyStatus": "public",   // public, private, or unlisted
  "notifySubscribers": false,  // Send notifications to subscribers
  "defaultLanguage": "en",     // Video language code
  "ageRestricted": false       // Mark as 18+ content
}
```

### Key Settings Explained
| Setting | Default | Description |
|---------|---------|-------------|
| includeSourceLink | true | Set to false if your YouTube account can't add external links (unverified accounts) |
| waitTimeoutSeconds | 900 | Gap between uploads in seconds. 900 = 15 minutes, 3600 = 1 hour |
| ageRestricted | false | Set to true if your content is for mature audiences (18+) |
| notifySubscribers | false | Set to true to notify subscribers on each upload |

## Requirements
- **n8n version:** 1.0+
- **Instagram:** Meta Developer account with Graph API access and Bearer Token
- **YouTube:** Google Cloud project with YouTube Data API v3 enabled and OAuth2 credentials

## Features
- Filters to VIDEO and REELS only (skips images)
- Smart title extraction from captions
- Hashtag to YouTube tags conversion
- Deduplication via n8n Tables
- COPPA compliance options (madeForKids settings)
- Configurable upload delays for drip-feeding content

## Category IDs Reference
| ID | Category |
|----|----------|
| 1 | Film & Animation |
| 10 | Music |
| 17 | Sports |
| 20 | Gaming |
| 22 | People & Blogs |
| 23 | Comedy |
| 24 | Entertainment |
| 25 | News & Politics |
| 27 | Education |
| 28 | Science & Technology |
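The metadata step (title from caption, hashtags to tags) can be sketched like this. It is a simplified illustration of the approach, not the workflow's exact Code node:

```javascript
// Take the first caption line as the title (hashtags removed, truncated to
// maxTitleLength) and turn every #hashtag into a YouTube tag.
function buildMetadata(caption, maxTitleLength = 100) {
  const firstLine = (caption.split('\n')[0] || '').replace(/#\w+/g, '').trim();
  const title = firstLine.slice(0, maxTitleLength);
  const tags = [...caption.matchAll(/#(\w+)/g)].map(m => m[1]);
  return { title, tags };
}

const meta = buildMetadata('Sunset timelapse over the bay #travel #sunset\nShot on phone');
console.log(meta); // → { title: 'Sunset timelapse over the bay', tags: [ 'travel', 'sunset' ] }
```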
by Grigory Frolov
📊 YouTube Personal Channel Videos → Google Sheets Automatically sync your YouTube videos (title, description, tags, publish date, captions, etc.) into Google Sheets — perfect for creators and marketers who want a clean content database for analysis or reporting. 🚀 What this workflow does ✅ Connects to your personal YouTube channel via Google OAuth 🔁 Fetches all uploaded videos automatically (with pagination) 🏷 Extracts metadata: title, description, tags, privacy status, upload status, thumbnail, etc. 🧾 Retrieves captions (SRT format) if available 📈 Writes or updates data in your Google Sheets document ⚙️ Can be run manually or scheduled via Cron 🧩 Nodes used Manual Trigger** — to start manually or connect with Cron HTTP Request (YouTube API v3)** — fetches channel, uploads, and captions Code Nodes** — manage pagination and collect IDs SplitOut** — iterates through video lists Google Sheets (appendOrUpdate)** — stores data neatly If Conditions** — control data flow and prevent empty responses ⚙️ Setup guide Connect your Google Account Used for both YouTube API and Google Sheets. Make sure the credentials are set up in Google OAuth2 API and Google Sheets OAuth2 API nodes. Create a Google Sheet Add a tab named Videos. Add these columns: youtube_id | title | description | tags | privacyStatus | uploadStatus | thumbnail | captions You can also include categoryId, maxres, or published if you’d like. Replace the sample Sheet ID In each Google Sheets node, open the “Spreadsheet” field and choose your own document. Make sure the sheet name matches the tab name (Videos). Run the workflow Execute it manually first to pull your latest uploads. Optionally add a Cron Trigger node for daily sync (e.g., once per day). Check your Sheet Your data should appear instantly — with each video’s metadata and captions (if available). 🧠 Notes & tips ⚙️ The flow loops through all pages of your upload playlist automatically — no manual pagination needed. 
🕒 The workflow uses YouTube's "contentDetails.relatedPlaylists.uploads" to ensure you only fetch your own uploads.
💡 Captions fetch may fail for private videos — use "Continue on Fail" if you want the rest to continue.
🧮 Ideal for dashboards, reporting sheets, SEO analysis, or automation triggers.
💾 To improve speed, you can disable the "Captions" branch if you only need metadata.

👥 Ideal for

🎬 YouTube creators maintaining a video database
📊 Marketing teams tracking SEO performance
🧠 Digital professionals building analytics dashboards
⚙️ Automation experts using YouTube data in other workflows

💛 Credits

Created by Grigory Frolov
YouTube: @gregfrolovpersonal
More workflows and guides → ozwebexpert.com/n8n
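The automatic pagination described in the tips can be sketched as the kind of loop the workflow's Code node performs. Here, `fetchPage` stands in for the HTTP Request node calling the YouTube Data API v3 `playlistItems` endpoint; the helper's name and shape are illustrative, not the template's exact code:

```javascript
// Sketch of the pagination loop over the uploads playlist.
// `fetchPage(pageToken)` stands in for the HTTP Request node calling:
//   GET https://www.googleapis.com/youtube/v3/playlistItems
//     ?part=snippet&playlistId=<uploads playlist>&maxResults=50&pageToken=...
async function collectAllPages(fetchPage) {
  const items = [];
  let pageToken; // undefined on the first request
  do {
    const page = await fetchPage(pageToken);
    items.push(...page.items);
    // YouTube omits nextPageToken on the last page, which ends the loop
    pageToken = page.nextPageToken;
  } while (pageToken);
  return items;
}
```

The uploads playlist ID itself comes from `contentDetails.relatedPlaylists.uploads` on the `channels` endpoint, as noted above.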
by Incrementors
Description

Connect Fireflies to this workflow once and every meeting you record automatically generates a complete talk-time breakdown delivered to your Telegram. The moment transcription finishes, the workflow fetches per-speaker analytics from Fireflies, builds a visual bar chart showing exactly how much each person spoke, and checks whether anyone crossed your dominance threshold. You receive a formatted report on your phone within seconds of the meeting ending — no AI cost, no spreadsheets, no manual review. Built for managers, coaches, and team leads who want to know whether their meetings are balanced or dominated by one voice.

What This Workflow Does

- **Triggers instantly when a meeting ends** — Fireflies fires the workflow the moment transcription completes, so the report arrives on your phone without any manual step
- **Fetches per-speaker analytics from Fireflies** — Retrieves talk time percentage, word count, speaking pace, questions asked, longest single speech, speaking turns, and filler words for each participant
- **Builds a visual ASCII bar chart per speaker** — Converts each speaker's talk percentage into a 10-block bar so you can see the balance at a glance
- **Detects meeting dominance automatically** — Checks if any speaker crossed your configured threshold (default 70%) and labels them as DOMINANT in the report
- **Counts total questions across all speakers** — Surfaces the total number of questions asked in the meeting as a single engagement metric
- **Delivers the full report to Telegram instantly** — Sends a formatted message to your Telegram chat with speaker breakdown, dominance alert, and a direct Fireflies link

Setup Requirements

Tools Needed

- n8n instance (self-hosted or cloud)
- Fireflies.ai account with webhook access and speaker analytics enabled
- Telegram bot (created via @BotFather)

Credentials Required

- Fireflies API key (pasted into 2.
Set — Config Values)
- Telegram Bot credential (connected in n8n)

Estimated Setup Time: 10–15 minutes

Step-by-Step Setup

1. Import the workflow — Open n8n → Workflows → Import from JSON → paste the workflow JSON → click Import
2. Activate the workflow and copy the webhook URL — Toggle the workflow to Active → click on node 1. Webhook — Fireflies Transcript Done → copy the Production URL shown
3. Register the webhook in Fireflies — Log in to app.fireflies.ai → Settings → Developer Settings → Webhooks → paste the webhook URL → save
4. Get your Fireflies API key — In Fireflies, go to Settings → Integrations → Fireflies API → copy your API key
5. Get your Telegram Chat ID — Open Telegram → search for @userinfobot → send /start → it replies with your chat ID number
6. Fill in Config Values — Open node 2. Set — Config Values → replace the placeholders:

| Field | What to enter |
|---|---|
| YOUR_FIREFLIES_API_KEY | Your Fireflies API key from step 4 |
| YOUR_TELEGRAM_CHAT_ID | Your Telegram chat ID number from step 5 |
| dominanceThreshold | Leave as 70 or change to your preferred percentage (e.g. 60 for stricter flagging) |

> ⚠️ Do NOT change the meetingId field — it is extracted automatically from the Fireflies webhook payload.

7. Connect Telegram — Open node 5. Telegram — Send Talk-Time Report → click the credential dropdown → add your Telegram Bot API credential (paste the bot token from @BotFather) → save
8. Send /start to your bot — Open Telegram → find your bot → send /start — this is required before the bot can message you for the first time
9. Activate the workflow — Confirm the workflow is Active — Fireflies will now fire it automatically after every recorded meeting

How It Works (Step by Step)

Step 1 — Webhook: Fireflies Transcript Done

This step listens for a signal from Fireflies. Every time a meeting finishes transcribing, Fireflies sends a POST request to this webhook URL containing the meeting ID.
No manual trigger is needed — it fires automatically after every recorded call where you are the organizer.

Step 2 — Set: Config Values

Your Fireflies API key, Telegram chat ID, dominance threshold, and the meeting ID from the webhook are stored here. The meeting ID is extracted automatically from all possible Fireflies payload formats — you never need to enter it manually.

Step 3 — HTTP: Fetch Speaker Analytics

A request is sent to the Fireflies API using your API key and the meeting ID. It retrieves the per-speaker analytics Fireflies computed for this meeting: each speaker's name, talk time percentage, total duration, word count, words per minute, questions asked, longest single speech in seconds, number of speaking turns, and filler word count. If no transcript is found, the workflow stops with an error.

Step 4 — Code: Analyze Speaker Data

This is where all the analysis happens. Speakers are sorted from highest to lowest talk time percentage. For each speaker, a 10-block visual bar is built — each filled block (█) represents 10% of talk time and empty blocks (░) fill the rest, so a speaker at 70% talk time shows as ███████░░░ 70%. The dominance threshold from your config is checked — if any speaker's percentage meets or exceeds it, they are flagged as DOMINANT in the report and an alert line is added. Total questions across all speakers are also summed. The complete Telegram message is assembled with the meeting title, date, duration, total questions, the dominance alert or balance confirmation, and the full speaker breakdown. If no speaker analytics are available for the meeting, the workflow stops with an error rather than sending an empty message.

Step 5 — Telegram: Send Talk-Time Report

The formatted report is sent to your Telegram chat.
The message shows the meeting title, date, duration, total questions asked, a dominance alert or balance confirmation, and one block per speaker showing their bar chart, talk percentage, minutes, word count, pace, questions, longest speech, speaking turns, and filler words. A direct Fireflies link is included at the bottom.

Key Features

✅ **Zero AI cost** — This workflow uses only Fireflies' built-in speaker analytics — no OpenAI or any other AI API is called, so it costs nothing to run beyond your Fireflies plan
✅ **Visual bar chart per speaker** — Each speaker gets a 10-block ASCII bar so you can read the balance distribution without looking at numbers
✅ **Configurable dominance threshold** — Change one number in Config Values to set your own standard for what counts as dominating a meeting
✅ **Seven metrics per speaker** — Talk percentage, word count, speaking pace, questions asked, longest speech, speaking turns, and filler words — all from one Fireflies API call
✅ **Instant phone delivery** — Telegram delivers the report to your phone the moment the meeting transcript is ready — no email, no dashboard login
✅ **Dominance detection with name** — The alert names the specific speaker who dominated, not just a generic warning, so you know exactly who to address
✅ **Total questions surfaced** — The combined question count across all speakers gives you a quick read on how engaged and interactive the meeting was
✅ **No spreadsheet or dashboard needed** — The entire report arrives formatted in a single Telegram message — nothing to open, nothing to export

Customisation Options

- Lower the dominance threshold for stricter flagging — In node 2. Set — Config Values, change dominanceThreshold from 70 to 60 or 55 to flag meetings where one person spoke for more than that percentage — useful for highly collaborative teams where even 60% is too much.
- Add a Slack alert for dominant meetings — After node 4.
Code — Analyze Speaker Data, add an IF check that reads the hasDominance flag — if true, post a Slack message to a #meeting-health channel with the speaker name and percentage so the team lead is notified immediately.
- Log every meeting to Google Sheets — After node 4. Code — Analyze Speaker Data, add a Google Sheets append step to record the meeting title, date, total speakers, dominant speaker name (if any), and each speaker's talk percentage — building a long-term dataset of meeting balance over time.
- Send to multiple Telegram chats — In node 2. Set — Config Values, you can only store one telegramChatId. To send to multiple recipients, duplicate node 5. Telegram — Send Talk-Time Report, change the chatId in each copy to a different ID, and connect all copies after step 4.
- Filter out very short meetings — In node 4. Code — Analyze Speaker Data, add a check at the top: if meetingDurationMin is less than 5, return a simple message saying the meeting was too short to analyze and skip the full report — avoiding noise from brief check-in calls.

Troubleshooting

Workflow not triggering when a meeting ends:
- Confirm the workflow is Active — inactive workflows do not receive Fireflies webhooks
- Log in to app.fireflies.ai → Settings → Developer Settings → Webhooks → confirm the URL is saved and matches the Production URL from node 1. Webhook — Fireflies Transcript Done exactly
- Fireflies only fires webhooks for meetings where you are the organizer — guest meetings will not trigger it

Fireflies API key error or transcript not found:
- Confirm YOUR_FIREFLIES_API_KEY in node 2.
Set — Config Values is replaced with your actual key — not the placeholder text
- Get your key from fireflies.ai → Settings → Integrations → Fireflies API
- If speaker analytics return empty, your Fireflies plan may not include speaker diarization — check your plan settings in Fireflies

No speaker data in the report:
- Fireflies speaker analytics require that speaker names were detected during transcription — if all speakers show as "Unknown" or the analytics array is empty, speaker diarization may not have run on this meeting
- Check in the Fireflies dashboard that the meeting transcript shows individual speaker labels — if it does not, speaker analytics will not be available
- The workflow throws an error and stops cleanly in this case rather than sending an empty or broken message

Telegram message not arriving:
- Confirm the Telegram Bot credential in node 5. Telegram — Send Talk-Time Report is connected with a valid bot token from @BotFather
- Confirm YOUR_TELEGRAM_CHAT_ID in node 2. Set — Config Values is your numeric chat ID — get it from @userinfobot in Telegram
- You must send /start to your bot in Telegram before the first message can be delivered — bots cannot initiate conversations without this step

Dominance alert not firing when expected:
- Check that dominanceThreshold in node 2. Set — Config Values is stored as a string number (e.g. "70") and matches what you expect — the code converts it to an integer automatically
- Open the execution log of node 4. Code — Analyze Speaker Data and check the hasDominance value and the sorted speaker percentages to confirm which speaker's percentage the threshold was compared against

Support

Need help setting this up or want a custom version built for your team or agency?
📧 Email: info@incrementors.com
🌐 Website: https://www.incrementors.com/
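The Step 4 bar-chart and dominance logic described in this workflow can be sketched as a standalone function. Field names such as `talkPercentage` and `questions` are illustrative assumptions about the analytics shape, not the exact Fireflies payload:

```javascript
// Sketch of the Code node's analysis: sort speakers, build a 10-block
// bar (each █ = 10% of talk time), flag anyone at or above the threshold,
// and sum questions across all speakers.
function analyzeSpeakers(speakers, dominanceThreshold = 70) {
  const sorted = [...speakers].sort((a, b) => b.talkPercentage - a.talkPercentage);
  const lines = sorted.map((s) => {
    const filled = Math.round(s.talkPercentage / 10);
    const bar = '█'.repeat(filled) + '░'.repeat(10 - filled);
    const dominant = s.talkPercentage >= dominanceThreshold ? ' (DOMINANT)' : '';
    return `${s.name}: ${bar} ${s.talkPercentage}%${dominant}`;
  });
  const totalQuestions = sorted.reduce((sum, s) => sum + (s.questions || 0), 0);
  const hasDominance = sorted.some((s) => s.talkPercentage >= dominanceThreshold);
  return { lines, totalQuestions, hasDominance };
}
```

A speaker at 70% would render as `███████░░░ 70% (DOMINANT)` with the default threshold, matching the example in Step 4.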
by Piotr Sikora
Who’s it for

This workflow is perfect for content managers, SEO specialists, and website owners who want to easily analyze their WordPress content structure. It automatically fetches posts, categories, and tags from a WordPress site and exports them into a Google Sheet for further review or optimization.

What it does

This automation connects to the WordPress REST API, collects data about posts, categories, and tags, and maps the category and tag names directly into each post. It then appends all this enriched data to a Google Sheet — providing a quick, clean way to audit your site’s content and taxonomy structure.

How it works

1. Form trigger: Start the workflow by submitting a form with your website URL and the number of posts to analyze.
2. Fetch WordPress data: The workflow sends three API requests to collect posts, categories, and tags.
3. Merge data: It combines all the data into one stream using the Merge node.
4. Code transformation: A Code node replaces category and tag IDs with their actual names.
5. Google Sheets export: Posts are appended to a Google Sheet with the columns URL, Title, Categories, and Tags.
6. Completion form: Once the list is created, you’ll get a confirmation message and a link to your sheet.

If the WordPress API isn’t available, the workflow automatically displays an error message to help you troubleshoot.

Requirements

- A WordPress site with the REST API enabled (/wp-json/wp/v2/).
- A Google account connected to n8n with access to Google Sheets.
- A Google Sheet containing the columns: URL, Title, Categories, Tags.

How to set up

1. Import this workflow into n8n.
2. Connect your Google Sheets account under credentials.
3. Make sure your WordPress site’s API is accessible publicly.
4. Adjust the Post limit (per_page) in the form node if needed.
5. Run the workflow and check your Google Sheet for results.

How to customize

- Add additional WordPress endpoints (e.g., authors, comments) by duplicating and modifying HTTP Request nodes.
- Replace Google Sheets with another integration (like Airtable or Notion).
- Extend the Code node to include SEO metadata such as meta descriptions or featured images.
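The Code-node transformation this workflow describes — replacing category and tag IDs with their names — can be sketched as below. The `categories`/`tags` ID arrays and `title.rendered` field match the real `/wp-json/wp/v2/posts` response shape, while the function name and output column names are illustrative:

```javascript
// Sketch: map WordPress category/tag IDs on each post to their names.
// `categories` and `tags` are the responses from /wp-json/wp/v2/categories
// and /wp-json/wp/v2/tags, each an array of { id, name, ... } objects.
function enrichPosts(posts, categories, tags) {
  // Build lookup maps once, e.g. { 12 => 'News' }
  const catNames = new Map(categories.map((c) => [c.id, c.name]));
  const tagNames = new Map(tags.map((t) => [t.id, t.name]));
  return posts.map((p) => ({
    URL: p.link,
    Title: p.title.rendered,
    // Fall back to the raw ID if a name is missing from the lookup
    Categories: (p.categories || []).map((id) => catNames.get(id) || id).join(', '),
    Tags: (p.tags || []).map((id) => tagNames.get(id) || id).join(', '),
  }));
}
```

Each returned object maps one-to-one onto a row in the Google Sheet's URL / Title / Categories / Tags columns.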