by JJ Tham
Struggling with inaccurate Meta Ads tracking due to iOS 14+ and ad blockers? 📉 This workflow is your solution. It provides a robust, server-side endpoint to reliably send conversion events directly to the Meta Conversions API (CAPI). By bypassing the browser, you can achieve more accurate ad attribution and optimize your campaigns with better data. This template handles all the required data normalization, hashing, and formatting, so you can set up server-side tracking in minutes.

## ⚙️ How it works

This workflow provides a webhook URL that you can send your conversion data to (e.g., from a web form, CRM, or backend). Once it receives the data, it:

1. **Sanitizes User Data:** Cleans and normalizes PII like email and phone numbers.
2. **Hashes PII:** Securely hashes the user data using SHA-256 to meet Meta's privacy requirements (see the sketch below).
3. **Formats the Payload:** Assembles all the data, including click IDs (fbc, fbp) and user info, into the exact format required by the Meta CAPI.
4. **Sends the Event:** Makes a direct, server-to-server call to Meta, reliably logging your conversion event.

## 👥 Who's it for?

- **Performance Marketers:** Improve ad performance and ROAS with more accurate conversion data.
- **Lead Generation Businesses:** Reliably track form submissions as conversions.
- **E-commerce Stores:** Send purchase events from your backend to ensure nothing gets missed.
- **Developers:** A ready-to-use template for implementing server-side tracking without writing custom code from scratch.

## 🛠️ How to set up

Setup is straightforward. You'll need your Meta Pixel ID and a CAPI Access Token. For a complete walkthrough, check out the tutorial video for this workflow on YouTube: https://youtu.be/_fdMPIYEvFM

The basic steps are to copy the webhook URL, configure your form or backend to send the correct data payload, and add your Meta Pixel ID and Access Token to the final HTTP Request node.

👉 For a detailed, step-by-step guide, please refer to the yellow sticky note inside the workflow.
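For reference, here is a minimal TypeScript sketch of the normalize → hash → send sequence the workflow performs in its Code and HTTP Request nodes. The event fields and placeholder credentials are illustrative; consult Meta's CAPI docs for the full `user_data` spec.

```ts
import { createHash } from "node:crypto";

// Meta requires PII to be normalized (trimmed, lowercased; digits-only
// phone numbers) before SHA-256 hashing.
const sha256 = (value: string): string =>
  createHash("sha256").update(value).digest("hex");

const normalizeEmail = (email: string): string => email.trim().toLowerCase();
const normalizePhone = (phone: string): string => phone.replace(/\D/g, ""); // digits only

// Placeholders for your own credentials.
const PIXEL_ID = "YOUR_PIXEL_ID";
const ACCESS_TOKEN = "YOUR_CAPI_ACCESS_TOKEN";

const payload = {
  data: [
    {
      event_name: "Lead",
      event_time: Math.floor(Date.now() / 1000),
      action_source: "website",
      user_data: {
        em: [sha256(normalizeEmail(" Jane.Doe@Example.com "))],
        ph: [sha256(normalizePhone("+1 (555) 010-0000"))],
        fbc: "fb.1.1700000000000.AbCdEfGh", // click IDs pass through unhashed
        fbp: "fb.1.1700000000000.1234567890",
      },
    },
  ],
};

await fetch(
  `https://graph.facebook.com/v19.0/${PIXEL_ID}/events?access_token=${ACCESS_TOKEN}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  },
);
```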
by Rakin Jakaria
## Use Cases

Analyze e-commerce product pages for conversion optimization, audit SaaS landing pages for signup improvements, or evaluate marketing campaign pages for better lead generation.

## Good to know

- At time of writing, Google Gemini API calls have usage costs. See Google AI Pricing for current rates.
- The workflow analyzes publicly accessible pages only - pages behind login walls or with restricted access won't work.
- Analysis quality depends on page content structure - heavily image-based pages may receive limited text-based recommendations.

## How it works

1. The user submits a landing page URL through the form trigger interface.
2. The HTTP Request node fetches the complete HTML content from the target landing page.
3. Content is converted from HTML to markdown for cleaner AI processing and better text extraction.
4. Google Gemini 2.5 Flash analyzes the page using expert CRO knowledge and 2024 conversion best practices.
5. The AI generates specific, actionable recommendations based on actual page content rather than generic advice.
6. The Information Extractor processes the analysis into 5 prioritized improvement tips with relevant visual indicators.
7. Results are delivered through a completion form showing concrete steps to improve conversion rates.

## How to use

The form trigger is configured for direct URL submission but can be replaced with webhook triggers for integration into existing websites or apps (see the sketch below). Multiple pages can be analyzed sequentially, though each requires a separate workflow execution. Recommendations focus on high-impact changes that don't require heavy development work.

## Requirements

- Google Gemini (PaLM) API account for AI-powered analysis
- Publicly accessible landing pages for analysis
- n8n instance with proper webhook configuration

## Customizing this workflow

CRO analysis can be tailored for specific industries by modifying the AI system prompt - try focusing on e-commerce checkout flows, SaaS trial conversions, or local business lead capture forms. Add competitive analysis by incorporating multiple URL inputs and comparative recommendations.
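If you swap the form trigger for a Webhook node, invoking the workflow from your own site is a one-line POST. The endpoint path and `url` field name below are assumptions; match them to whatever you configure in the Webhook node.

```ts
// Hypothetical webhook call; adjust the path and body field
// to match your Webhook node's configuration.
const response = await fetch("https://your-n8n-instance/webhook/cro-audit", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ url: "https://example.com/landing-page" }),
});

const tips = await response.json(); // the 5 prioritized improvement tips
console.log(tips);
```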
by Growth AI
## French Public Procurement Tender Monitoring Workflow

### Overview

This n8n workflow automates the monitoring and filtering of French public procurement tenders (BOAMP - Bulletin Officiel des Annonces des Marchés Publics). It retrieves tenders based on your preferences, filters them by market type, and identifies relevant opportunities using keyword matching.

### Who is this for?

- Companies seeking French public procurement opportunities
- Consultants monitoring specific market sectors
- Organizations tracking government contracts in France

### What it does

The workflow operates in two main phases:

**Phase 1: Automated Tender Collection**
- Retrieves all tenders from the BOAMP API based on your configuration
- Filters by market type (Works, Services, Supplies)
- Stores complete tender data in Google Sheets
- Handles pagination automatically for large datasets

**Phase 2: Intelligent Keyword Filtering**
- Downloads and extracts text from tender PDF documents
- Searches for your specified keywords within tender content
- Saves matching tenders to a separate "Target" sheet for easy review
- Tracks processing status to avoid duplicates

### Requirements

- n8n instance (self-hosted or cloud)
- Google account with Google Sheets access
- Google Sheets API credentials configured in n8n

### Setup Instructions

**Step 1: Duplicate the Configuration Spreadsheet**
1. Access the template spreadsheet: Configuration Template
2. Click File → Make a copy
3. Save to your Google Drive
4. Note the URL of your new spreadsheet

**Step 2: Configure Your Preferences**

Open your copied spreadsheet and configure the Config tab:

- **Market Types** - Check the categories you want to monitor: Travaux (Works/Construction), Services, Fournitures (Supplies)
- **Search Period** - Enter the number of days to look back (e.g., "30" for the last 30 days)
- **Keywords** - Enter your search terms as a comma-separated list (e.g., "informatique, cloud, cybersécurité")

**Step 3: Import the Workflow**
1. Copy the workflow JSON from this template
2. In n8n, click Workflows → Import from File/URL
3. Paste the JSON and import

**Step 4: Update Google Sheets Connections**

Replace all Google Sheets node URLs with your spreadsheet URL. Nodes to update:

- Get config (2 instances)
- Get keyword
- Get Offset
- Get All
- Append row in sheet
- Update offset
- Reset Offset
- Ok
- Target offre

For each node: open the node settings, update the Document ID field with your spreadsheet URL, and verify the Sheet Name matches your spreadsheet tabs.

**Step 5: Configure Schedule Triggers**

The workflow has two schedule triggers:

- **Schedule Trigger1 (Phase 1 - Tender Collection):** Default `0 8 1 * *` (1st day of month at 8:00 AM). Adjust based on how frequently you want to collect tenders.
- **Schedule Trigger (Phase 2 - Keyword Filtering):** Default `0 10 1 * *` (1st day of month at 10:00 AM). Should run after Phase 1 completes.

To modify: open the Schedule Trigger node, click Cron Expression, and adjust the timing as needed.

**Step 6: Test the Workflow**
1. Manually execute Phase 1 by clicking the Schedule Trigger1 node and selecting Execute Node
2. Verify tenders appear in your "All" sheet
3. Execute Phase 2 by triggering the Schedule Trigger node
4. Check the "Target" sheet for matching tenders

### How the Workflow Works

**Phase 1: Tender Collection Process** (sketched below)
1. **Configuration Loading** - Reads your preferences from Google Sheets
2. **Offset Management** - Tracks pagination position for API calls
3. **API Request** - Fetches up to 100 tenders per batch from BOAMP
4. **Market Type Filtering** - Keeps only selected market categories
5. **Data Storage** - Formats and saves tenders to the "All" sheet
6. **Pagination Loop** - Continues until all tenders are retrieved
7. **Offset Reset** - Prepares for the next execution
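The offset loop in Phase 1 boils down to the pattern below. The endpoint URL and response field names here are illustrative placeholders, not the exact ones used in the workflow's HTTP Request node; the 100-record batch size and 10-second pause match the performance notes further down.

```ts
// Illustrative offset-based pagination loop; the endpoint and
// parameter names are placeholders for the real BOAMP API call.
const BASE_URL = "https://example-boamp-api/records"; // placeholder endpoint
const LIMIT = 100;

async function fetchAllTenders(): Promise<unknown[]> {
  const tenders: unknown[] = [];
  let offset = 0;
  while (true) {
    const res = await fetch(`${BASE_URL}?limit=${LIMIT}&offset=${offset}`);
    const page = (await res.json()) as { results: unknown[] };
    tenders.push(...page.results);
    if (page.results.length < LIMIT) break; // last page reached
    offset += LIMIT;
    await new Promise((r) => setTimeout(r, 10_000)); // 10 s pause between batches
  }
  return tenders;
}
```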
**Phase 2: Keyword Matching Process**
1. **Keyword Loading** - Retrieves search terms from configuration
2. **Tender Retrieval** - Gets unprocessed tenders from the "All" sheet
3. **Sequential Processing** - Loops through each tender individually
4. **PDF Extraction** - Downloads and extracts text from tender documents
5. **Keyword Analysis** - Searches for matches with accent/case normalization (see the sketch at the end of this description)
6. **Status Update** - Marks the tender as processed
7. **Match Evaluation** - Determines if keywords were found
8. **Target Storage** - Saves relevant tenders with match details

### Customization Options

**Adjust API Parameters** - In the HTTP Request node, you can modify:
- limit: Number of records per batch (default: 100)
- Additional filters in the where parameter

**Modify Keyword Matching Logic** - Edit the Get query node to adjust:
- Text normalization (accent removal, case sensitivity)
- Match proximity requirements
- Context length around matches

**Change Data Format** - Update the Format Results node to modify:
- Date formatting
- PDF URL generation
- Field mappings

### Spreadsheet Structure

Your Google Sheets document should contain these tabs:

- **Config** - Your configuration settings
- **Offset** - Pagination tracking (managed automatically)
- **All** - Complete tender database
- **Target** - Filtered tenders matching your keywords

### Troubleshooting

**No tenders appearing in "All" sheet:**
- Verify your configuration period isn't too restrictive
- Check that at least one market type is selected
- Ensure the API is accessible (test the HTTP Request node)

**PDF extraction errors:**
- Some PDFs may be malformed or protected
- Check the URL generation in the Format Results node
- Verify PDF URLs are accessible in a browser

**Duplicate tenders in Target sheet:**
- Ensure the "Ok" status is being written correctly
- Check the Filter node is excluding processed tenders
- Verify row_number matching in update operations

**Keywords not matching:**
- Keywords are case-insensitive and accent-insensitive
- Verify your keywords are spelled correctly
- Check the extracted text contains your terms

### Performance Considerations

- Phase 1 processes 100 tenders per iteration with a 10-second wait between batches
- Phase 2 processes tenders sequentially to avoid overloading PDF extraction
- Large datasets (1000+ tenders) may take significant time to process
- Consider running Phase 1 less frequently if tender volume is manageable

### Data Privacy

- All data is stored in your Google Sheets
- No external databases or third-party storage are used
- The BOAMP API is publicly accessible (no authentication required)
- Ensure your Google Sheets permissions are properly configured

### Support and Updates

This workflow retrieves data from the public BOAMP API. If the API structure changes, nodes may require updates. Monitor the workflow execution logs for errors and adjust accordingly.
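The accent- and case-insensitive matching described above can be reproduced with Unicode normalization, roughly as follows (a sketch, not the Get query node's exact code):

```ts
// Strip accents and lowercase so "Cybersécurité" matches "cybersecurite".
const normalize = (text: string): string =>
  text
    .normalize("NFD") // decompose accented characters
    .replace(/[\u0300-\u036f]/g, "") // drop combining diacritics
    .toLowerCase();

function findMatches(pdfText: string, keywords: string[]): string[] {
  const haystack = normalize(pdfText);
  return keywords.filter((kw) => haystack.includes(normalize(kw.trim())));
}

// Example: both spellings match regardless of accents or case.
console.log(findMatches("Appel d'offres : Cybersécurité et cloud", ["cybersecurité", "CLOUD"]));
// → ["cybersecurité", "CLOUD"]
```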
by gotoHuman
Collaborate with an AI Agent on a joint document, e.g. for creating your content marketing strategy, a sales plan, project status updates, or market analysis. The AI Agent generates markdown text that you can review and edit in gotoHuman, and only then is the existing Google Doc updated. In this example we use AI to update our company's content strategy for the next quarter.

## How It Works

The AI Agent has access to other documents that provide enough context to write the content strategy. We ask it to generate the text in markdown format. To ensure our strategy document is not changed without our approval, we request a human review using gotoHuman. There the markdown content can be edited and properly previewed. Our workflow resumes once the review is completed. We check if the content was approved and then write the (potentially edited) markdown to our Google Docs file via the Google Drive node.

## How to set up

1. Most importantly, install the verified gotoHuman node before importing this template! (Just add the node to a blank canvas before importing. Works with n8n cloud and self-hosted.)
2. Set up your credentials for gotoHuman, OpenAI, and Google Docs/Drive.
3. In gotoHuman, select and create the pre-built review template "Strategy agent" or import the ID: F4sbcPEpyhNKBKbG9C1d
4. Select this template in the gotoHuman node.

## Requirements

You need accounts for:

- gotoHuman (human supervision)
- OpenAI (doc writing)
- Google Docs/Drive

## How to customize

- Let the workflow run on a schedule, or create and connect a manual trigger in gotoHuman that lets you capture additional human input to feed your agent
- Provide the agent with more context to write the content strategy
- Use the gotoHuman response (or a Google Drive file change trigger) to run additional AI agents that can execute on the new strategy
by Atta
This workflow automatically turns any YouTube video into a structured blog post with Gemini AI. By sending a simple POST request with a YouTube URL to a webhook, it downloads the video's audio, transcribes the content, and generates a blog-ready article with a title, description, tags, and category. The final result, along with the full transcript and original video URL, is delivered to your chosen webhook or CMS.

## How it works

The workflow handles the entire process of transforming YouTube videos into complete blog posts using Gemini AI transcription and structured text generation. Once triggered, it:

1. Downloads the video's audio (see the yt-dlp sketch at the end of this description)
2. Transcribes the spoken content into text
3. Generates a blog post in the video's original language
4. Creates:
   - A clear and engaging title
   - A short description
   - Suggested category and tags
   - The full transcript of the video
   - The original YouTube video URL

This makes it easy to repurpose video content into publish-ready articles in minutes. This template is ideal for content creators, marketers, educators, and bloggers who want to quickly turn video content into written posts without manual transcription or editing.

## Setup Instructions

1. **Install yt-dlp** on the local machine or server where n8n runs. This is required to download YouTube audio.
2. **Get a Google Gemini API key** and configure it in your AI nodes.
3. **Webhook Input Configuration:**
   - Endpoint: The workflow starts with a Webhook Trigger.
   - Method: POST
   - Example Request Body:

```json
{ "videoUrl": "https://www.youtube.com/watch?v=lW5xEm7iSXk" }
```

4. **Configure Output Webhook:** Add your target endpoint in the last node, where the blog post JSON is sent. This could be your CMS, a Notion database, or another integration.

## Customization Guidance

- **Writing Style:** Update the AI Agent's prompt to adjust tone (e.g., casual, professional, SEO-optimized).
- **Metadata:** Modify how categories and tags are generated to fit your website's taxonomy.
- **Integration:** Swap the final webhook with WordPress, Ghost, Notion, or Slack to fit your publishing workflow.
- **Transcript Handling:** Save the full transcript separately if you also want searchable video archives.
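Inside n8n, the download step is typically an Execute Command node. Here is a minimal Node.js/TypeScript equivalent, assuming yt-dlp is on the PATH; the exact flags the template uses may differ.

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Download the best-quality audio and convert it to mp3 for transcription.
// These flags are a reasonable default, not necessarily the template's own.
async function downloadAudio(videoUrl: string, outPath: string): Promise<void> {
  await run("yt-dlp", [
    "-x", // extract audio only
    "--audio-format", "mp3",
    "-o", outPath,
    videoUrl,
  ]);
}

await downloadAudio("https://www.youtube.com/watch?v=lW5xEm7iSXk", "/tmp/audio.mp3");
```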
by Cheng Siong Chin
## How It Works

This workflow automates document authenticity verification by combining AI-based content analysis with immutable blockchain records. It is built for compliance teams, legal departments, supply chain managers, and regulators who need tamper-proof validation and auditable proof. The solution addresses the challenge of detecting forged or altered documents while producing verifiable evidence that meets legal and regulatory standards.

Documents are submitted via webhook and processed through PDF content extraction. Anthropic's Claude analyzes the content for authenticity signals such as inconsistencies, anomalies, and formatting issues, returning structured authenticity scores. Verified documents trigger blockchain record creation and publication to a distributed ledger, with cryptographic proofs (see the sketch below) shared automatically with carriers and regulators through HTTP APIs.

## Setup Steps

1. Configure the webhook endpoint URL for document submission
2. Add your Anthropic API key to the Chat Model node for AI analysis
3. Set up blockchain network credentials in the HTTP nodes for record preparation
4. Connect a Gmail account and specify compliance team email addresses
5. Customize the authenticity thresholds

## Prerequisites

Anthropic API key, blockchain network access and credentials

## Use Cases

Supply chain documentation verification for import/export compliance

## Customization

Adjust AI prompts for industry-specific authenticity criteria

## Benefits

Eliminates manual document review time while improving fraud detection accuracy
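The "cryptographic proof" in a record like this is usually a digest of the document bytes, anchored on-chain so later copies can be checked against it. A minimal sketch follows; the field names and record shape are assumptions for illustration, not this template's actual node configuration.

```ts
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

// Hash the raw PDF bytes; any later alteration changes the digest,
// so comparing digests proves the document is unchanged.
async function documentDigest(path: string): Promise<string> {
  const bytes = await readFile(path);
  return createHash("sha256").update(bytes).digest("hex");
}

// Hypothetical ledger record; the real payload depends on the
// blockchain network configured in the workflow's HTTP nodes.
const record = {
  docHash: await documentDigest("/tmp/bill-of-lading.pdf"),
  authenticityScore: 0.94, // from Claude's structured analysis
  verifiedAt: new Date().toISOString(),
};
console.log(record);
```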
by Khairul Muhtadin
The Prompt converter workflow tackles the challenge of turning your natural language video ideas into perfectly formatted JSON prompts tailored for Veo 3 video generation. By leveraging Langchain AI nodes and Google Gemini, this workflow automates and refines your input to help you create high-quality videos faster and with more precision. Think of it as your personal video prompt translator that speaks fluent cinematic!

## 💡 Why Use Prompt Converter?

- **Save time:** Automate converting complex video prompts into structured JSON, cutting manual formatting headaches and boosting productivity.
- **Avoid guesswork:** Eliminate unclear video prompt details by generating detailed, cinematic descriptions that align perfectly with Veo 3 specs.
- **Improve output quality:** Optimize every parameter for Veo 3's video generation model to get realistic and stunning results every time.
- **Gain a creative edge:** Turn vague ideas into vivid video concepts with AI-powered enhancement, your video project's secret weapon.

## ⚡ Perfect For

- **Video creators:** Content developers wanting quick, precise video prompt formatting without coding hassles.
- **AI enthusiasts:** Developers and hobbyists exploring Langchain and Google Gemini for media generation.
- **Marketing teams:** Professionals creating video ads or visuals who need consistent prompt structuring that saves time.

## 🔧 How It Works

1. ⏱ **Trigger:** The user submits a free-text prompt via message or webhook.
2. 📎 **Process:** The text goes through an AI model that understands and reworks it into detailed JSON parameters tailored for Veo 3.
3. 🤖 **Smart Logic:** Langchain nodes parse and optimize the prompt with cinematic details, set reasonable defaults, and structure the data precisely (see the schema sketch at the end of this description).
4. 💌 **Output:** The refined JSON prompt is sent to Google Gemini for video generation with optimized settings.

## 🔐 Quick Setup

1. Import the JSON file to your n8n instance
2. Add credentials: Azure OpenAI, Gemini API, OpenRouter API
3. Customize: Adjust prompt templates or default parameters in the Prompt converter node
4. Test: Run your workflow with sample text prompts to see videos come to life

## 🧩 You'll Need

- An active n8n instance
- Azure OpenAI API
- Gemini API Key
- OpenRouter API (alternative AI option)

## 🛠️ Level Up Ideas

- Add integration with video hosting platforms to auto-upload generated videos

## 🧠 Nodes Used

- **Prompt Input** (Chat Trigger)
- **OpenAI** (Azure OpenAI GPT model)
- **Alternative** (OpenRouter API)
- **Prompt converter** (Langchain chain LLM for JSON conversion)
- **JSON parser** (structured output extraction)
- **Generate a video** (Google Gemini video generation)

Made by: Khaisa Studio
Tags: video generation, AI, Langchain, automation, Google Gemini
Category: Video Production
Need custom work? Contact me
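The structured prompt the converter emits looks roughly like the following. This interface is a hypothetical illustration of the kinds of fields a cinematic Veo 3 prompt carries; the template's actual JSON parser schema may name things differently.

```ts
// Hypothetical shape of the converter's output; the real schema lives
// in the workflow's JSON parser node and may differ.
interface Veo3Prompt {
  description: string;      // detailed cinematic scene description
  style: string;            // e.g. "photorealistic", "anime"
  camera: {
    movement: string;       // e.g. "slow dolly-in"
    angle: string;          // e.g. "low angle"
  };
  lighting: string;         // e.g. "golden hour, soft shadows"
  durationSeconds: number;  // sensible default set by the converter
  aspectRatio: "16:9" | "9:16";
}

const example: Veo3Prompt = {
  description: "A lone surfer rides a glassy wave at dawn",
  style: "photorealistic",
  camera: { movement: "tracking shot", angle: "eye level" },
  lighting: "golden hour, soft shadows",
  durationSeconds: 8,
  aspectRatio: "16:9",
};
```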
by WeblineIndia
## Fill iOS localization gaps from .strings → Google Sheets and PR with placeholders (GitHub)

This n8n workflow automatically identifies missing translations in .strings files across iOS localizations (e.g., Base.lproj vs fr.lproj) and generates a report in Google Sheets. Optionally, it creates a GitHub PR that inserts placeholder strings ("TODO_TRANSLATE") so builds don't fail. Supports DRY_RUN mode.

### Who's it for

- iOS teams who want fast feedback on missing translations.
- Localization managers who want a shared sheet to assign work to translators.

### How it works

1. A GitHub Webhook triggers on push or pull request.
2. The iOS repo is scanned for .strings files under Base.lproj or en.lproj and their target-language counterparts.
3. Keys are compared to identify what's missing (see the diff sketch below).
4. A new or existing Google Sheet tab (e.g., fr) is updated with the missing entries.
5. If enabled, a GitHub PR is created with placeholder keys (e.g., "TODO_TRANSLATE").

### How to set up

1. Import the workflow JSON into your n8n instance.
2. Set Config node values like:

```json
{
  "GITHUB_OWNER": "your-github-user-name",
  "GITHUB_REPO": "your-iOS-repo-name",
  "BASE_BRANCH": "develop",
  "SHEET_ID": "<YOUR_GOOGLE_SHEET_ID>",
  "ENABLE_PR": "true",
  "IOS_SOURCE_GLOB": "/Base.lproj/*.strings,/en.lproj/*.strings",
  "IOS_TARGET_GLOB": "*/.lproj/*.strings",
  "PLACEHOLDER_VALUE": "TODO_TRANSLATE",
  "BRANCH_TEMPLATE": "chore/l10n-gap-{{YYYYMMDD}}"
}
```

3. Create the GitHub Webhook:
   - URL: https://your-n8n-instance/webhook/l10n-gap-ios
   - Content-Type: application/json
   - Trigger on: Push, Pull Request
4. Connect credentials:
   - GitHub token with repo scope
   - Google Sheets API
   - (Optional) Slack OAuth + SMTP

### Requirements

| Tool | Needed For | Notes |
| --- | --- | --- |
| GitHub Repo | Webhook, API for PRs | repo token or App |
| Google Sheets | Sheet output | Needs valid SHEET_ID or create-per-run |
| Slack (optional) | Notifications | chat:write scope |
| SMTP (optional) | Email fallback | Standard SMTP creds |

### How to customize

- **Multiple Locales:** Add comma-separated values to TARGET_LANGS_CSV (e.g., fr,de,es).
- **Globs:** Adjust IOS_SOURCE_GLOB and IOS_TARGET_GLOB to scan only certain modules or file patterns.
- **Ignore Rules:** Add IGNORE_KEY_PREFIXES_CSV to skip certain internal/debug strings.
- **Placeholder Value:** Change PLACEHOLDER_VALUE to something meaningful like "@@@".
- **Slack/Email:** Set SLACK_CHANNEL and EMAIL_FALLBACK_TO_CSV appropriately.
- **DRY_RUN:** Set to true to skip GitHub PR creation but still update the sheet.

### Add-ons

- **Android support:** Add a second path for strings.xml (values → values-<lang>), same diff → Sheets → placeholder PR.
- **Multiple languages at once:** Expand TARGET_LANGS_CSV and loop tabs + placeholder commits per locale.
- **.stringsdict handling:** Validate plural/format entries and open a precise PR.
- **Translator DMs:** Provide a LANG → Slack handle/email map to DM translators with their specific file/key counts.
- **GitLab/Bitbucket variants:** Replace GitHub API calls with GitLab/Bitbucket equivalents to open Merge Requests.

### Use Case Examples

- Before a test build, ensure fr has all keys present; placeholders keep the app compiling.
- A weekly run creates a single sheet for translators and a PR with placeholders, avoiding last-minute breakages.
- A new screen adds 12 strings; the bot flags and pre-fills them across locales.
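The core of the workflow is a key diff over Apple's .strings format. Here is a minimal sketch of parsing and diffing; it is a simplified parser that ignores comments and escaped quotes, which the real workflow has to handle.

```ts
// Parse lines of the form  "key" = "value";  into a map.
// Simplified: ignores /* comments */ and escaped quotes inside values.
function parseStrings(content: string): Map<string, string> {
  const entries = new Map<string, string>();
  const re = /^"(.+?)"\s*=\s*"(.*)";\s*$/gm;
  for (const match of content.matchAll(re)) {
    entries.set(match[1], match[2]);
  }
  return entries;
}

// Keys present in the base locale but missing from the target locale.
function missingKeys(base: string, target: string): string[] {
  const targetKeys = parseStrings(target);
  return [...parseStrings(base).keys()].filter((k) => !targetKeys.has(k));
}

// Generate placeholder lines for the PR so the build keeps compiling.
function placeholderLines(keys: string[], value = "TODO_TRANSLATE"): string {
  return keys.map((k) => `"${k}" = "${value}";`).join("\n");
}

const base = `"welcome_title" = "Welcome";\n"cta_button" = "Get started";`;
const fr = `"welcome_title" = "Bienvenue";`;
console.log(placeholderLines(missingKeys(base, fr)));
// → "cta_button" = "TODO_TRANSLATE";
```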
### Common troubleshooting

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| No source files found | Glob doesn't match Base.lproj or en.lproj | Adjust IOS_SOURCE_GLOB |
| Target file missing | fr.lproj doesn't exist yet | Will be created in the placeholder PR |
| Parsing skips entries | Non-standard string format in file | Ensure proper .strings format: "key" = "value"; |
| Sheet not updating | SHEET_ID missing or insufficient permission | Add a valid ID or allow write access |
| PR not created | ENABLE_PR=false or no missing keys | Enable PR and ensure at least one key is missing |
| Slack/Email not received | Missing credentials or config | Configure Slack/SMTP properly and set recipient fields |

### Need Help?

Want to expand this for Android? Loop through 5+ locales at once? Or replace GitHub with GitLab? Contact our n8n team at WeblineIndia with your repo and locale setup and we'll help tailor it to your translation workflow!
by Anirudh Aeran
This workflow provides a complete backend solution for building your own WhatsApp marketing dashboard. It enables you to send dynamic, personalized, and rich-media broadcast messages to an entire contact list stored in Google Sheets. The system is built on three core functions: automatically syncing your approved Meta templates, providing an API endpoint for your front-end to fetch those templates, and a powerful broadcast engine that merges your contact data with the selected template for mass delivery.

## Who's it for?

This template is for marketers, developers, and businesses who want to run sophisticated WhatsApp campaigns without being limited by off-the-shelf tools. It's perfect for anyone who needs to send personalized bulk messages with dynamic content (like unique images or links for each user) and wants to operate from a simple, custom-built web interface.

## How it works

This workflow is composed of three independent, powerful parts:

1. **Automated Template Sync:** A scheduled trigger runs periodically to fetch all of your approved message templates directly from your Meta Business Account. It then clears and updates an n8n Data Table, ensuring your list of available templates is always perfectly in sync with Meta.
2. **Front-end API Endpoint:** A dedicated webhook acts as an API for your dashboard. When your front-end calls this endpoint, it returns a clean JSON list of all available templates from the n8n Data Table, which you can use to populate a dropdown menu for the user.
3. **Dynamic Broadcast Engine:** The main webhook listens for a request from your front-end, which includes the name of the template to send. It then:
   - Looks up the template's structure in the Data Table.
   - Fetches all contacts from your Google Sheet.
   - For each contact, a Code node dynamically constructs a personalized API request (see the sketch below). It can merge the contact's name into the body, add a unique user ID to a button's URL, and even pull a specific image URL from your Google Sheet to use as a dynamic header.
   - Sends the fully personalized message to the contact.

## How to set up

1. **Pre-requisite - Front-end:** This workflow is a backend and is designed to be triggered by a front-end application. You will need a simple UI with a dropdown to select a template and a button to trigger the broadcast.
2. **Meta for Developers:** You need a Meta App with the WhatsApp Business API configured. From your app, you will need your WhatsApp Business Account ID, a Phone Number ID, and a permanent System User Access Token.
3. **n8n Data Table:** Create an n8n Data Table (e.g., named "WhatsApp Templates") with the following columns: template_name, language_code, components_structure, template_id, status, category.
4. **Google Sheet:** Create a Google Sheet to store your contacts. It must have columns like Phone Number, Full Name, and, for dynamic images, Marketing Image URL.
5. **Configure Credentials:**
   - Create an HTTP Header Auth credential in n8n for WhatsApp. Use Authorization as the Header Name and Bearer YOUR_PERMANENT_TOKEN as the value.
   - Add your Google Sheets credentials.
6. **Configure Nodes:**
   - In both HTTP Request nodes, select your WhatsApp Header Auth credential. Update the URLs with your own Phone Number ID and WABA ID.
   - In the Google Sheets node, select your credential and enter the Sheet ID.
   - In all Data Table nodes, select the Data Table you created.
7. **First Run:** Manually execute the "Sync Meta Templates" flow (starting with the Schedule Trigger) once to populate your Data Table with your templates.
8. **Activate:** Activate all parts of the workflow.
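For orientation, here is a sketch of what the Code node's per-contact payload construction amounts to, using the WhatsApp Cloud API's template-message format. The field names follow Meta's public docs; the template's actual node code and your template's component layout may differ.

```ts
// Build one personalized template message per contact.
// The shape follows Meta's Cloud API template-message format.
interface Contact {
  phone: string;
  fullName: string;
  userId: string;
  imageUrl?: string;
}

function buildMessage(contact: Contact, templateName: string) {
  return {
    messaging_product: "whatsapp",
    to: contact.phone,
    type: "template",
    template: {
      name: templateName,
      language: { code: "en_US" },
      components: [
        // Dynamic image header pulled from the Google Sheet row.
        ...(contact.imageUrl
          ? [{ type: "header", parameters: [{ type: "image", image: { link: contact.imageUrl } }] }]
          : []),
        // Merge the contact's name into the body text ({{1}}).
        { type: "body", parameters: [{ type: "text", text: contact.fullName }] },
        // Append a unique user ID to a dynamic button URL.
        { type: "button", sub_type: "url", index: "0", parameters: [{ type: "text", text: contact.userId }] },
      ],
    },
  };
}
```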
## Requirements

- A Meta for Developers account with a configured WhatsApp Business App.
- A permanent System User Access Token for the WhatsApp Business API.
- A Google Sheets account.
- A front-end application/dashboard to trigger the workflow.
by Daniel
Harness OpenAI's Sora 2 for instant video creation from text or images using fal.ai's API, powered by GPT-5 for refined prompts that ensure cinematic quality. This template processes form submissions, intelligently routes to text-to-video (with mandatory prompt enhancement) or image-to-video modes, and polls for completion before redirecting to your generated clip.

## 📋 What This Template Does

Users submit prompts, aspect ratios (9:16 or 16:9), models (sora-2 or pro), durations (4s, 8s, or 12s), and optional images via a web form. For text-to-video, GPT-5 automatically refines the prompt for optimal Sora 2 results; image mode uses the raw input. The workflow calls one of four fal.ai endpoints (text-to-video, text-to-video/pro, image-to-video, image-to-video/pro), then loops every 60s to check status until the video is ready (see the polling sketch at the end of this description).

- Handles dual modes: text (with GPT-5 enhancement) or image-seeded generation
- Supports pro upgrades for higher fidelity and longer clips
- Auto-uploads images to a temp host and polls asynchronously for hands-free results
- Redirects directly to the final video URL on completion

## 🔧 Prerequisites

- n8n instance with HTTP Request and LangChain nodes enabled
- fal.ai account for Sora 2 API access
- OpenAI account for GPT-5 prompt refinement

## 🔑 Required Credentials

**fal.ai API Setup**
1. Sign up at fal.ai and navigate to Dashboard → API Keys
2. Generate a new key with "sora-2" permissions (full access recommended)
3. In n8n, create a "Header Auth" credential: name it "fal.ai", set Header Name to "Authorization", and Value to "Key [Your API Key]"

**OpenAI API Setup**
1. Log in at platform.openai.com → API Keys (top-right profile menu)
2. Click "Create new secret key" and copy it (store securely)
3. In n8n, add an "OpenAI API" credential: paste the key and select the GPT-5 model in the LLM node

## ⚙️ Configuration Steps

1. Import the workflow JSON into your n8n instance via Settings → Import from File
2. Assign fal.ai and OpenAI credentials to the relevant HTTP Request and LLM nodes
3. Activate the workflow; the form URL auto-generates in the trigger node
4. Test by submitting a sample prompt (e.g., "A cat chasing a laser"); monitor executions for video output
5. Adjust the polling wait (60s node) for longer generations if needed

## 🎯 Use Cases

- **Social Media Teams:** Generate 9:16 vertical Reels from text ideas, like quick product animations enhanced by GPT-5 for professional polish
- **Content Marketers:** Animate uploaded images into 8s promo clips, e.g., turning a static ad graphic into a dynamic story for email campaigns
- **Educators and Trainers:** Create 4s explainer videos from outlines, such as historical reenactments, using pro mode for detailed visuals
- **App Developers:** Embed as a backend service to process user prompts into Sora 2 videos on-demand for creative tools

## ⚠️ Troubleshooting

- **API quota exceeded:** Check the fal.ai dashboard for usage limits; upgrade to the pro tier or extend polling waits
- **Prompt refinement fails:** Ensure the GPT-5 credential is set and the output matches the JSON schema; test the LLM node independently
- **Image upload errors:** Confirm the file is JPG/PNG under 10MB; verify the tmpfiles.org endpoint with a manual curl test
- **Endless polling loop:** Add an IF node after 10 checks to time out; increase the wait to 120s for 12s pro generations
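The submit-and-poll pattern the workflow implements looks like this in TypeScript. The endpoint paths follow fal.ai's queue convention, but the model path, request body fields, and result shape are assumptions to verify against fal.ai's docs.

```ts
// Submit a generation job, poll every 60 s, then fetch the video URL.
// Paths and body fields are assumptions based on fal.ai's queue API.
const FAL_KEY = "YOUR_FAL_API_KEY"; // placeholder
const headers = { Authorization: `Key ${FAL_KEY}`, "Content-Type": "application/json" };

async function generateVideo(prompt: string): Promise<string> {
  // 1. Submit the generation request to the queue.
  const submit = await fetch("https://queue.fal.run/fal-ai/sora-2/text-to-video", {
    method: "POST",
    headers,
    body: JSON.stringify({ prompt, aspect_ratio: "16:9", duration: 8 }),
  });
  const { request_id } = (await submit.json()) as { request_id: string };

  // 2. Poll every 60 s until the job completes (the workflow's Wait node).
  const statusUrl = `https://queue.fal.run/fal-ai/sora-2/requests/${request_id}/status`;
  while (true) {
    const status = (await (await fetch(statusUrl, { headers })).json()) as { status: string };
    if (status.status === "COMPLETED") break;
    await new Promise((r) => setTimeout(r, 60_000));
  }

  // 3. Fetch the result, which carries the final video URL.
  const resultUrl = `https://queue.fal.run/fal-ai/sora-2/requests/${request_id}`;
  const result = (await (await fetch(resultUrl, { headers })).json()) as { video: { url: string } };
  return result.video.url;
}
```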
by Grigory Frolov
## 📊 YouTube Personal Channel Videos → Google Sheets

Automatically sync your YouTube videos (title, description, tags, publish date, captions, etc.) into Google Sheets: perfect for creators and marketers who want a clean content database for analysis or reporting.

### 🚀 What this workflow does

- ✅ Connects to your personal YouTube channel via Google OAuth
- 🔁 Fetches all uploaded videos automatically (with pagination)
- 🏷 Extracts metadata: title, description, tags, privacy status, upload status, thumbnail, etc.
- 🧾 Retrieves captions (SRT format) if available
- 📈 Writes or updates data in your Google Sheets document
- ⚙️ Can be run manually or scheduled via Cron

### 🧩 Nodes used

- **Manual Trigger** - to start manually or connect with Cron
- **HTTP Request (YouTube API v3)** - fetches channel, uploads, and captions
- **Code Nodes** - manage pagination and collect IDs
- **SplitOut** - iterates through video lists
- **Google Sheets (appendOrUpdate)** - stores data neatly
- **If Conditions** - control data flow and prevent empty responses

### ⚙️ Setup guide

1. **Connect your Google Account** - Used for both the YouTube API and Google Sheets. Make sure the credentials are set up in the Google OAuth2 API and Google Sheets OAuth2 API nodes.
2. **Create a Google Sheet** - Add a tab named Videos with these columns: youtube_id | title | description | tags | privacyStatus | uploadStatus | thumbnail | captions. You can also include categoryId, maxres, or published if you'd like.
3. **Replace the sample Sheet ID** - In each Google Sheets node, open the "Spreadsheet" field and choose your own document. Make sure the sheet name matches the tab name (Videos).
4. **Run the workflow** - Execute it manually first to pull your latest uploads. Optionally add a Cron Trigger node for daily sync (e.g., once per day).
5. **Check your Sheet** - Your data should appear instantly, with each video's metadata and captions (if available).

### 🧠 Notes & tips

- ⚙️ The flow loops through all pages of your upload playlist automatically - no manual pagination needed (a sketch of this pattern appears at the end of this description).
- 🕒 The workflow uses YouTube's contentDetails.relatedPlaylists.uploads to ensure you only fetch your own uploads.
- 💡 The captions fetch may fail for private videos - use "Continue on Fail" if you want the rest to continue.
- 🧮 Ideal for dashboards, reporting sheets, SEO analysis, or automation triggers.
- 💾 To improve speed, you can disable the "Captions" branch if you only need metadata.

### 👥 Ideal for

- 🎬 YouTube creators maintaining a video database
- 📊 Marketing teams tracking SEO performance
- 🧠 Digital professionals building analytics dashboards
- ⚙️ Automation experts using YouTube data in other workflows

### 💛 Credits

Created by Grigory Frolov
YouTube: @gregfrolovpersonal
More workflows and guides → ozwebexpert.com/n8n
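The pagination the Code nodes handle follows the standard YouTube Data API v3 pattern: resolve the channel's uploads playlist, then walk playlistItems with pageToken. A condensed sketch (OAuth token acquisition omitted; n8n's credentials manage it for you):

```ts
// Walk the uploads playlist page by page using the YouTube Data API v3.
// ACCESS_TOKEN is a placeholder for the OAuth token n8n manages.
const ACCESS_TOKEN = "YOUR_OAUTH_TOKEN";
const headers = { Authorization: `Bearer ${ACCESS_TOKEN}` };
const API = "https://www.googleapis.com/youtube/v3";

async function fetchAllUploads(): Promise<unknown[]> {
  // 1. Resolve the channel's uploads playlist ID.
  const ch = await (await fetch(`${API}/channels?part=contentDetails&mine=true`, { headers })).json();
  const uploadsId = ch.items[0].contentDetails.relatedPlaylists.uploads;

  // 2. Page through playlistItems until nextPageToken disappears.
  const videos: unknown[] = [];
  let pageToken = "";
  do {
    const url = `${API}/playlistItems?part=snippet,contentDetails&playlistId=${uploadsId}&maxResults=50&pageToken=${pageToken}`;
    const page = await (await fetch(url, { headers })).json();
    videos.push(...page.items);
    pageToken = page.nextPageToken ?? "";
  } while (pageToken);
  return videos;
}
```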
by AI/ML API | D1m7asis
## Who's it for

Teams and makers who want a plug-and-play vision bot: users send a photo in Telegram, the bot returns a concise description plus OCR text. No custom servers required - just n8n, a Telegram bot, and an AIMLAPI key.

## What it does / How it works

The workflow listens for new Telegram messages, fetches the highest-resolution photo, converts it to base64, normalizes the MIME type, and calls AIMLAPI (GPT-4o Vision) via the HTTP Request node using the OpenAI-compatible messages format with an image_url data URI (see the sketch at the end of this description). The model returns a short caption and extracted text. The answer is sent back to the same Telegram chat.

## Requirements

- n8n instance (self-hosted or cloud)
- Telegram bot token (from @BotFather)
- AIMLAPI account and API key (OpenAI-compatible endpoint)

## How to set up

1. Create a Telegram bot with @BotFather and copy the token.
2. In n8n, add Telegram credentials (no hardcoded tokens in nodes).
3. Add AIMLAPI credentials with your API key (base URL: https://api.aimlapi.com/v1).
4. Import the workflow JSON and connect credentials in the nodes.
5. Execute the trigger and send a photo to your bot to test.

## How to customize the workflow

- Modify the vision prompt (e.g., add brand, language, or formatting rules).
- Switch models within AIMLAPI (any vision-capable model using the same messages schema).
- Add an IF branch for text-only messages (reply with guidance).
- Log usage to Google Sheets or a database (user ID, file ID, response).
- Add rate limits, user allowlists, or Markdown formatting in Telegram responses.
- Increase timeouts/retries in the HTTP Request node for long-running images.
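The OpenAI-compatible vision call reduces to a chat-completions request whose user message carries an image_url data URI. A minimal sketch against the AIMLAPI base URL given above; the model name and prompt are illustrative.

```ts
// Send a base64-encoded photo to an OpenAI-compatible vision endpoint.
const API_KEY = "YOUR_AIMLAPI_KEY"; // placeholder

async function describePhoto(imageBase64: string, mimeType: string): Promise<string> {
  const res = await fetch("https://api.aimlapi.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o", // any vision-capable model on AIMLAPI
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "Describe this image briefly, then extract any visible text." },
            { type: "image_url", image_url: { url: `data:${mimeType};base64,${imageBase64}` } },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // caption + OCR text
}
```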