by SpaGreen Creative
# Send Automatic WhatsApp Order Confirmations from Shopify with Rapiwa API

## Who's it for

This n8n workflow helps Shopify store owners and teams automatically confirm orders via WhatsApp. It checks whether the customer's number is valid using the Rapiwa API, sends a personalized message, and logs every attempt in Google Sheets—saving time and reducing manual work. Whether you're a solo entrepreneur or managing a small team, this solution gives you a low-cost alternative to the official WhatsApp Business API, without losing control or personalization.

## Features

- Receives new order details via webhook upon order creation or update.
- Iterates over incoming data in manageable batches for smoother processing.
- Extracts and formats customer and order details from the Shopify webhook payload.
- Strips non-numeric characters from WhatsApp numbers for consistent formatting.
- Uses the Rapiwa API to check if the WhatsApp number is valid and active.
- Branches the workflow based on number validity — separates verified from unverified.
- Sends a custom WhatsApp confirmation message to verified customers using Rapiwa.
- Updates Google Sheet rows with status and validity.

## How it Works / What It Does

1. Triggered by a Shopify webhook or by reading rows from a Google Sheet.
2. Normalizes and cleans the order payload.
3. Extracts details like customer name, phone, items, shipping, and payment info.
4. Cleans phone numbers (removes special characters).
5. Verifies if the number is registered on WhatsApp via the Rapiwa API.
6. If valid: sends a templated WhatsApp message and updates the Google Sheet with validity = verified and status = sent.
7. If invalid: skips sending and updates the sheet with validity = unverified and status = not sent.
8. Adds a wait/delay between sends to prevent rate limits.
9. Keeps an audit trail in the connected Google Sheet.

## How to Set Up

1. Set up a Shopify webhook for new orders (or connect a Google Sheet).
2. Create a Google Sheet with columns: name, number, order id, item name, total price, validity, status.
3. Create and configure a Rapiwa Bearer token in n8n.
4. Add a Google Sheets OAuth2 credential in n8n.
5. Import the workflow in n8n and configure these nodes:
   - Webhook or Sheet Trigger
   - Loop Over Items (SplitInBatches)
   - Normalize Payload (Code)
   - Clean WhatsApp Number (Code)
   - Rapiwa WhatsApp Check (HTTP Request)
   - Conditional Branch (If)
   - Send WhatsApp Message (HTTP Request)
   - Update Google Sheet (Google Sheets)
   - Wait Node (delay per send)

## Requirements

- Shopify store with order webhook enabled (or order list in Google Sheet)
- A verified Rapiwa API token
- A working n8n instance with HTTP and Google Sheets nodes enabled
- A Google Sheet with the required structure and valid OAuth credentials in n8n

## How to Customize the Workflow

- Modify the message template with your own brand tone or emojis.
- Add country-code logic in the Clean Number node if needed (see the sketch after the Workflow Highlights section).
- Use a unique order id in your Google Sheet to prevent mismatches.
- Increase or decrease the delay in the Wait node (e.g., 5–10 seconds).
- Use additional logic in Code nodes to handle discounts, promotions, or more line items.

## Workflow Highlights

- Triggered by a Shopify webhook update — receives new order data from Shopify via webhook.
- Cleans and extracts order data from the raw payload.
- Normalizes and validates the customer's WhatsApp number using the Rapiwa API (verify-whatsapp endpoint).
- Sends the order confirmation via Rapiwa's send-message endpoint.
- Logs every result into Google Sheets (verified/unverified + sent/not sent).
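A minimal sketch of the Clean WhatsApp Number Code node mentioned above, assuming the normalized payload exposes a `customer_phone` field and that numbers without a country code should default to Bangladesh (880), as in the sample sheet; adjust the field name and default to your own payload.

```javascript
// Clean WhatsApp Number (Code node, "Run Once for All Items") – illustrative sketch.
// Field name `customer_phone` and the default country code are assumptions.
const DEFAULT_COUNTRY_CODE = '880';

return items.map((item) => {
  const raw = String(item.json.customer_phone || '');
  // Keep digits only, e.g. "+880 1322-827799" -> "8801322827799"
  let cleaned = raw.replace(/\D/g, '');
  // Local numbers such as "01322827799" get the country code prepended
  if (cleaned.startsWith('0')) {
    cleaned = DEFAULT_COUNTRY_CODE + cleaned.slice(1);
  }
  item.json.cleaned_number = cleaned;
  return item;
});
```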
## Setup in n8n

### 1. Check WhatsApp Registration

Use an HTTP Request node:

- URL: `https://app.rapiwa.com/api/verify-whatsapp`
- Method: POST
- Auth: httpBearerAuth using your Rapiwa token
- Body: `{ "number": "cleaned_number" }`

### 2. Branch Based on Validity

Use an If node:

- Condition: `{{ $json.data.exists }} == true` (or `"true"` if the API returns a string)

### 3. Send Message via Rapiwa

- Endpoint: `https://app.rapiwa.com/api/send-message`
- Method: POST
- Body:

```
Hi {{ $json.customer_full_name }},

Thank you for shopping with SpaGreen Creative! We're happy to confirm that your order has been successfully placed.

🧾 Order Details
• Product: {{ $json.line_item.title }}
• SKU: {{ $json.line_item.sku }}
• Quantity: {{ $json.line_item.quantity }}
• Vendor: {{ $json.line_item.vendor }}
• Order ID: {{ $json.name }}
• Product ID: {{ $json.line_item.product_id }}

📦 Shipping Information
{{ $json.shipping_address.address1 }} {{ $json.shipping_address.address2 }}
{{ $json.shipping_address.city }}, {{ $json.shipping_address.country }} - {{ $json.shipping_address.zip }}

💳 Payment Summary
• Subtotal: {{ $json.subtotal_price }} BDT
• Tax (VAT): {{ $json.total_tax_amount }} BDT
• Shipping: {{ $json.total_shipping_amount }} BDT
• Discount: {{ $json.total_discount_amount }} BDT
• Total Paid: {{ $json.total_price }} BDT

Order Date: {{ $json.created_date }}

Warm wishes,
Team SpaGreen Creative
```

### Sample Google Sheet Structure

A Google Sheet formatted like this:

| name         | number        | order id      | item name                      | total price | validity | status |
| ------------ | ------------- | ------------- | ------------------------------ | ----------- | -------- | ------ |
| Abdul Mannan | 8801322827799 | 8986469695806 | Iphone 10                      | 1150        | verified | sent   |
| Abdul Mannan | 8801322827799 | 8986469695806 | S25 UltraXXXXeen Android Phone | 23000       | verified | sent   |

### Tips

- Always ensure phone numbers have a country code (e.g., 880 for BD).
- Clean numbers with regex: `replace(/\D/g, '')`
- Adjust Rapiwa API response parsing depending on the actual structure (`true` vs `"true"`).
- Use `row_number` for sheet updates, or a unique order id for better targeting.
- Use the Wait node to add 3–10 seconds between sends.

### Important Notes

- Avoid reordering sheet rows—updates rely on a consistent `row_number`.
- `shopify-app-auth` is the credential name used in the export—make sure it holds your Rapiwa token.
- Use a test sheet before going live.
- Rapiwa has request limits—avoid rapid sending.
- Add media/image logic later using `message_type: media`.

### Future Enhancements (Ideas)

- Add a Telegram/Slack alert once the batch finishes.
- Include media (e.g., product image, invoice) in the message.
- Detect and resend failed messages.
- Integrate with Shopify's GraphQL API for additional data.
- Auto-mark fulfillment status based on WhatsApp confirmation.

### Support & Community

- WhatsApp: 8801322827799
- Discord: discord
- Facebook Group: facebook group
- Website: https://spagreen.net
- Envato/Codecanyon: codecanyon portfolio
by Chris Pryce
## Overview

This workflow streamlines the process of setting up a chat-bot using the Signal Messenger API.

## What this is for

Chat-bot applications have become very popular on WhatsApp and Telegram. However, security-conscious people may be hesitant to connect their AI agents to these applications. Compared to WhatsApp and Telegram, the Signal messaging app is more secure and end-to-end encrypted by default. Partly because of this, it is more difficult to create a chat-bot application for Signal. It is still possible, however, if you host your own Signal API endpoint.

This workflow requires the installation of a community-node package. Some additional setup for the locally hosted Signal API endpoint is also necessary. As such, it will only work with self-hosted instances of n8n. You may use any AI model you wish for this chat-bot, and connect different tools and APIs depending on your use-case.

## How to setup

### Step 1: Setup Rest API

Before implementing this workflow, you must set up a local Signal Client REST API. This can be done using a Docker container based on this project: bbernhard/signal-cli-rest-api.

```yaml
version: "3"
services:
  signal-cli-rest-api:
    image: bbernhard/signal-cli-rest-api:latest
    environment:
      - MODE=normal # supported modes: json-rpc, native, normal
      #- AUTO_RECEIVE_SCHEDULE=0 22 * * * # enable this parameter on demand (see description below)
    ports:
      - "8080:8080" # map docker port 8080 to host port 8080
    volumes:
      - "./signal-cli-config:/home/.local/share/signal-cli" # map "signal-cli-config" folder on host system into the container; it contains the password and cryptographic keys when a new number is registered
```

After starting the Docker container, you will be able to interact with a local Signal client over a REST API at http://localhost:8080 (by default; this setting can be modified in the docker-compose file).

### Step 2: Install Node Package

This workflow requires the community-node package developed by ZBlaZe: n8n-nodes-signal-cli-rest-api. Navigate to ++your-n8n-server-address/settings/community-nodes++, click the 'Install' button, and paste in the community-node package name '++n8n-nodes-signal-cli-rest-api++' to install this community node.

### Step 3: Register and Verify Account

The last step requires a phone number. You may use your own phone number, a pre-paid SIM card, or (if you are a US resident) a free Google Voice digital phone number. An n8n web-form has been created in this workflow to make headless setup easier.

1. In the Form nodes, replace the URL with the address of your local Signal REST API endpoint.
2. Open the web-form and enter the phone number you are using to register your bot's Signal account.
3. Signal needs to verify you are human before registering an account. Visit this page to complete the captcha challenge. Then right-click the 'Open Signal' button and copy the link address. Paste this into the second form field and hit 'Submit'.
4. At this point you should receive a verification token as an SMS message to the phone number you are using. Copy this and paste it into the second web-form.

Your bot's Signal account should be set up now. To use this account in n8n, add the REST API address and account number (phone number) as a new n8n credential.

### Step 4: Optional

For extra security it is recommended to restrict communication with this chat-bot. In the 'If' workflow node, enter your own Signal account phone number. You may also provide a UUID. This is an identifier unique to your mobile device.
You can find this by sending a test message to your bot's signal account and copying it from the workflow execution data.
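As a quick smoke test of the local REST endpoint, a sketch like the following can be run from an n8n Code node (Node 18+, where global fetch is available) or any Node.js script. The `/v2/send` route and payload shape reflect the bbernhard/signal-cli-rest-api project, but treat them as assumptions and confirm against your container's API documentation if your version differs.

```javascript
// Smoke test for the local Signal REST API – illustrative only; verify the route
// and payload against your container's docs before relying on it.
const BASE_URL = 'http://localhost:8080';  // default port from the compose file
const BOT_NUMBER = '+15551234567';         // the number you registered (placeholder)
const RECIPIENT = '+15557654321';          // your personal Signal number (placeholder)

const res = await fetch(`${BASE_URL}/v2/send`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    number: BOT_NUMBER,          // sender (the bot account)
    recipients: [RECIPIENT],     // one or more recipients
    message: 'Hello from n8n!',  // test message body
  }),
});
console.log(res.status, await res.text());
```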
by Evoort Solutions
## 📺 Automated YouTube Video Metadata Extraction

### Workflow Description

This workflow leverages the YouTube Metadata API to automatically extract detailed video information from any YouTube URL. It uses n8n to automate the entire process and stores the metadata in a neatly formatted Google Docs document. Perfect for content creators, marketers, and researchers who need quick, organized YouTube video insights at scale.

### ⚙️ Node-by-Node Explanation

**1. ✅ On Form Submission**
This node acts as the trigger. When a user submits a form containing a YouTube video URL, the workflow is activated.
- Input: YouTube Video URL
- Platform: Webhook or n8n Form Trigger

**2. 🌐 YouTube Metadata API (HTTP Request)**
This node sends the video URL to the YouTube Metadata API via HTTP request.
- Action: GET request
- Headers: `X-RapidAPI-Key: YOUR_API_KEY` and `X-RapidAPI-Host: youtube-metadata1.p.rapidapi.com`
- Endpoint example: `https://youtube-metadata1.p.rapidapi.com/video?url=YOUTUBE_VIDEO_URL`
- Output: JSON with metadata such as title, description, views, likes, comments, duration, upload date, channel info, and thumbnails.

**3. 🧠 Reformat Metadata (Code Node)**
This node reformats the raw metadata into a clean, human-readable text block (a sketch appears at the end of this description). Example output format:

🎬 Title: How to Build Workflows with n8n
🧾 Description: This tutorial explains how to build...
👤 Channel: n8n Tutorials
📅 Published On: 2023-05-10
⏱️ Duration: 10 minutes, 30 seconds
👁️ Views: 45,678
👍 Likes: 1,234
💬 Comments: 210
🔗 URL: https://youtube.com/watch?v=abc123

**4. 📝 Append to Google Docs**
This node connects to your Google Docs and appends the formatted metadata into a selected document. Document format example:

📌 Video Entry – [Date]
🎬 Title:
🧾 Description:
👤 Channel:
📅 Published On:
⏱️ Duration:
👁️ Views:
👍 Likes:
💬 Comments:
🔗 URL:

### 📄 Use Cases

- **Content Creators**: Quickly analyze competitor content or inspirations.
- **Marketers**: Collect campaign video performance data.
- **Researchers**: Compile structured metadata across videos.
- **Social Media Managers**: Create content briefs effortlessly.

### ✅ Benefits

- 🚀 Time-saving: Automates manual video data extraction
- 📊 Accurate: Uses a reliable, updated YouTube API
- 📁 Organized: Formats and stores data in Google Docs
- 🔁 Scalable: Handles unlimited YouTube URLs
- 🎯 User-friendly: Simple setup and clean output

### 🔑 How to Get Your API Key for the YouTube Metadata API

1. Go to the YouTube Metadata API on RapidAPI.
2. Sign up or log in to your RapidAPI account.
3. Click Subscribe to Test and choose a pricing plan (free or paid).
4. Copy your API Key shown in the "X-RapidAPI-Key" section.
5. Use it in your HTTP request headers.

### 🧰 Google Docs Integration – Full Setup Instructions

**🔐 Step 1: Enable Google Docs API**
1. Go to the Google Cloud Console.
2. Create a new project or select an existing one.
3. Navigate to APIs & Services > Library.
4. Search for Google Docs API and click Enable.
5. Also enable Google Drive API (for document access).

**🛠 Step 2: Create OAuth Credentials**
1. Go to APIs & Services > Credentials.
2. Click Create Credentials > OAuth Client ID.
3. Select Web Application or Desktop App.
4. Add authorized redirect URIs if needed (e.g., for n8n OAuth).
5. Save your Client ID and Client Secret.

**🔗 Step 3: Connect n8n to Google Docs**
1. In n8n, go to Credentials > Google Docs API.
2. Add new credentials using the Client ID and Secret from above.
3. Authenticate with your Google account and allow access.

**📘 Step 4: Create and Format Your Google Document**
1. Go to Google Docs and create a new document.
2. Name it (e.g., YouTube Metadata Report).
3. Optionally, add a title or table of contents.
4. Copy the Document ID from the URL: `https://docs.google.com/document/d/DOCUMENT_ID/edit`

**🔄 Step 5: Use the Append Content to Document Node in n8n**
Use the Google Docs node in n8n with:
- Operation: Append Content
- Document ID: your copied Google Doc ID
- Content: the formatted video summary string

### 🎨 Customization Options

- 💡 Add Tags: Insert hashtags or categories based on video topics.
- 📆 Organize by Date: Create headers for each day or week's entries.
- 📸 Embed Thumbnails: Use `thumbnail_url` to embed preview images.
- 📊 Spreadsheet Export: Use Google Sheets instead of Docs if preferred.

### 🛠 Troubleshooting Tips

| Issue | Solution |
| ------------------------------ | ------------------------------------------------------------------- |
| ❌ Auth Error (Google Docs) | Ensure correct OAuth redirect URI and permissions. |
| ❌ API Request Fails | Check API key and request structure; test on RapidAPI's playground. |
| 📄 Doc Not Updating | Verify Document ID and sharing permissions. |
| 🧾 Bad Formatting | Debug the Code node output using logging or the console in n8n. |
| 🌐 n8n Timeout | Consider using Wait or Split In Batches for large submissions. |

### 🚀 Ready to Launch?

You can deploy this workflow in just minutes using n8n.
👉 Start Automating with n8n
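A minimal sketch of the Reformat Metadata Code node described in step 3. The field names (`title`, `channelTitle`, `viewCount`, and so on) are assumptions about the RapidAPI response; inspect a sample payload and adjust the mapping accordingly.

```javascript
// Reformat Metadata (Code node) – illustrative sketch; field names are assumed.
const v = $input.first().json;

const summary = [
  `🎬 Title: ${v.title ?? 'N/A'}`,
  `🧾 Description: ${(v.description ?? '').slice(0, 200)}`,
  `👤 Channel: ${v.channelTitle ?? 'N/A'}`,
  `📅 Published On: ${v.publishedAt ?? 'N/A'}`,
  `⏱️ Duration: ${v.duration ?? 'N/A'}`,
  `👁️ Views: ${v.viewCount ?? 'N/A'}`,
  `👍 Likes: ${v.likeCount ?? 'N/A'}`,
  `💬 Comments: ${v.commentCount ?? 'N/A'}`,
  `🔗 URL: ${v.url ?? ''}`,
].join('\n');

return [{ json: { summary } }];
```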
by Rakin Jakaria
## Use Cases

Analyze e-commerce product pages for conversion optimization, audit SaaS landing pages for signup improvements, or evaluate marketing campaign pages for better lead generation.

## Good to know

- At the time of writing, Google Gemini API calls have usage costs. See Google AI Pricing for current rates.
- The workflow analyzes publicly accessible pages only - pages behind login walls or with restricted access won't work.
- Analysis quality depends on page content structure - heavily image-based pages may receive limited text-based recommendations.

## How it works

1. A user submits a landing page URL through the form trigger interface.
2. The HTTP Request node fetches the complete HTML content from the target landing page.
3. Content is converted from HTML to markdown format for cleaner AI processing and better text extraction.
4. Google Gemini 2.5 Flash analyzes the page using expert CRO knowledge and 2024 conversion best practices.
5. The AI generates specific, actionable recommendations based on actual page content rather than generic advice.
6. The Information Extractor processes the analysis into 5 prioritized improvement tips with relevant visual indicators (an example output shape appears at the end of this description).
7. Results are delivered through a completion form showing concrete steps to improve conversion rates.

## How to use

- The form trigger is configured for direct URL submission but can be replaced with webhook triggers for integration into existing websites or apps.
- Multiple pages can be analyzed sequentially, though each requires a separate workflow execution.
- Recommendations focus on high-impact changes that don't require heavy development work.

## Requirements

- Google Gemini (PaLM) API account for AI-powered analysis
- Publicly accessible landing pages for analysis
- n8n instance with proper webhook configuration

## Customizing this workflow

CRO analysis can be tailored for specific industries by modifying the AI system prompt - try focusing on e-commerce checkout flows, SaaS trial conversions, or local business lead capture forms. Add competitive analysis by incorporating multiple URL inputs and comparative recommendations.
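For context on the Information Extractor step, a sketch of the kind of structured output it could be configured to return: five prioritized tips, each with a visual indicator. The field names are assumptions, not the template's actual schema.

```javascript
// Hypothetical shape of the extracted CRO recommendations – adjust to your extractor schema.
const recommendations = [
  { priority: 1, indicator: '🔴', tip: 'Move the primary CTA above the fold and make it high-contrast.' },
  { priority: 2, indicator: '🟠', tip: 'Shorten the signup form to the three essential fields.' },
  { priority: 3, indicator: '🟡', tip: 'Add social proof (logos, testimonials) near the pricing section.' },
  { priority: 4, indicator: '🟢', tip: 'Rewrite the headline to state the concrete outcome for the visitor.' },
  { priority: 5, indicator: '🔵', tip: 'Compress hero images to cut load time on mobile.' },
];

return [{ json: { recommendations } }];
```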
by Growth AI
# French Public Procurement Tender Monitoring Workflow

## Overview

This n8n workflow automates the monitoring and filtering of French public procurement tenders (BOAMP - Bulletin Officiel des Annonces des Marchés Publics). It retrieves tenders based on your preferences, filters them by market type, and identifies relevant opportunities using keyword matching.

## Who is this for?

- Companies seeking French public procurement opportunities
- Consultants monitoring specific market sectors
- Organizations tracking government contracts in France

## What it does

The workflow operates in two main phases:

**Phase 1: Automated Tender Collection**
- Retrieves all tenders from the BOAMP API based on your configuration
- Filters by market type (Works, Services, Supplies)
- Stores complete tender data in Google Sheets
- Handles pagination automatically for large datasets

**Phase 2: Intelligent Keyword Filtering**
- Downloads and extracts text from tender PDF documents
- Searches for your specified keywords within tender content
- Saves matching tenders to a separate "Target" sheet for easy review
- Tracks processing status to avoid duplicates

## Requirements

- n8n instance (self-hosted or cloud)
- Google account with Google Sheets access
- Google Sheets API credentials configured in n8n

## Setup Instructions

### Step 1: Duplicate the Configuration Spreadsheet

1. Access the template spreadsheet: Configuration Template
2. Click File → Make a copy
3. Save to your Google Drive
4. Note the URL of your new spreadsheet

### Step 2: Configure Your Preferences

Open your copied spreadsheet and configure the Config tab:

- **Market Types** - check the categories you want to monitor:
  - Travaux (Works/Construction)
  - Services
  - Fournitures (Supplies)
- **Search Period** - enter the number of days to look back (e.g., "30" for the last 30 days)
- **Keywords** - enter your search terms as a comma-separated list (e.g., "informatique, cloud, cybersécurité")

### Step 3: Import the Workflow

1. Copy the workflow JSON from this template
2. In n8n, click Workflows → Import from File/URL
3. Paste the JSON and import

### Step 4: Update Google Sheets Connections

Replace all Google Sheets node URLs with your spreadsheet URL. Nodes to update:

- Get config (2 instances)
- Get keyword
- Get Offset
- Get All
- Append row in sheet
- Update offset
- Reset Offset
- Ok
- Target offre

For each node:
1. Open the node settings
2. Update the Document ID field with your spreadsheet URL
3. Verify the Sheet Name matches your spreadsheet tabs

### Step 5: Configure Schedule Triggers

The workflow has two schedule triggers:

- **Schedule Trigger1** (Phase 1 - Tender Collection)
  - Default: `0 8 1 * *` (1st day of the month at 8:00 AM)
  - Adjust based on how frequently you want to collect tenders
- **Schedule Trigger** (Phase 2 - Keyword Filtering)
  - Default: `0 10 1 * *` (1st day of the month at 10:00 AM)
  - Should run after Phase 1 completes

To modify: open the Schedule Trigger node, click Cron Expression, and adjust the timing as needed.

### Step 6: Test the Workflow

1. Manually execute Phase 1 by clicking the Schedule Trigger1 node and selecting Execute Node
2. Verify tenders appear in your "All" sheet
3. Execute Phase 2 by triggering the Schedule Trigger node
4. Check the "Target" sheet for matching tenders

## How the Workflow Works

### Phase 1: Tender Collection Process

1. Configuration Loading - reads your preferences from Google Sheets
2. Offset Management - tracks pagination position for API calls
3. API Request - fetches up to 100 tenders per batch from BOAMP
4. Market Type Filtering - keeps only selected market categories
5. Data Storage - formats and saves tenders to the "All" sheet
6. Pagination Loop - continues until all tenders are retrieved
7. Offset Reset - prepares for the next execution
### Phase 2: Keyword Matching Process

1. Keyword Loading - retrieves search terms from the configuration
2. Tender Retrieval - gets unprocessed tenders from the "All" sheet
3. Sequential Processing - loops through each tender individually
4. PDF Extraction - downloads and extracts text from tender documents
5. Keyword Analysis - searches for matches with accent/case normalization
6. Status Update - marks the tender as processed
7. Match Evaluation - determines if keywords were found
8. Target Storage - saves relevant tenders with match details

## Customization Options

**Adjust API Parameters** - in the HTTP Request node, you can modify:
- `limit`: number of records per batch (default: 100)
- Additional filters in the `where` parameter

**Modify Keyword Matching Logic** - edit the Get query node to adjust (a sketch appears at the end of this description):
- Text normalization (accent removal, case sensitivity)
- Match proximity requirements
- Context length around matches

**Change Data Format** - update the Format Results node to modify:
- Date formatting
- PDF URL generation
- Field mappings

## Spreadsheet Structure

Your Google Sheet should contain these tabs:

- **Config** - your configuration settings
- **Offset** - pagination tracking (managed automatically)
- **All** - complete tender database
- **Target** - filtered tenders matching your keywords

## Troubleshooting

**No tenders appearing in the "All" sheet:**
- Verify your configuration period isn't too restrictive
- Check that at least one market type is selected
- Ensure the API is accessible (test the HTTP Request node)

**PDF extraction errors:**
- Some PDFs may be malformed or protected
- Check the URL generation in the Format Results node
- Verify PDF URLs are accessible in a browser

**Duplicate tenders in the Target sheet:**
- Ensure the "Ok" status is being written correctly
- Check that the Filter node is excluding processed tenders
- Verify row_number matching in update operations

**Keywords not matching:**
- Keywords are case-insensitive and accent-insensitive
- Verify your keywords are spelled correctly
- Check that the extracted text contains your terms

## Performance Considerations

- Phase 1 processes 100 tenders per iteration with a 10-second wait between batches
- Phase 2 processes tenders sequentially to avoid overloading PDF extraction
- Large datasets (1000+ tenders) may take significant time to process
- Consider running Phase 1 less frequently if tender volume is manageable

## Data Privacy

- All data is stored in your Google Sheets
- No external databases or third-party storage
- The BOAMP API is publicly accessible (no authentication required)
- Ensure your Google Sheets permissions are properly configured

## Support and Updates

This workflow retrieves data from the BOAMP public API. If the API structure changes, nodes may require updates. Monitor the workflow execution logs for errors and adjust accordingly.
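A minimal sketch of the accent- and case-insensitive matching performed in the keyword analysis step. The input field names (`pdfText`, `keywords`) are assumptions about what the preceding nodes provide.

```javascript
// Keyword matching with accent/case normalization – illustrative sketch for a Code node.
const normalize = (s) =>
  s.normalize('NFD').replace(/[\u0300-\u036f]/g, '').toLowerCase(); // strip accents, lowercase

const { pdfText = '', keywords = 'informatique, cloud, cybersécurité' } = $input.first().json;
const text = normalize(pdfText);
const terms = keywords.split(',').map((k) => normalize(k.trim())).filter(Boolean);

// Collect every keyword found, plus a short context snippet around the first hit
const matches = terms
  .filter((k) => text.includes(k))
  .map((k) => {
    const i = text.indexOf(k);
    return { keyword: k, context: text.slice(Math.max(0, i - 60), i + k.length + 60) };
  });

return [{ json: { matched: matches.length > 0, matches } }];
```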
by gotoHuman
Collaborate with an AI Agent on a joint document, e.g. for creating your content marketing strategy, a sales plan, project status updates, or market analysis. The AI Agent generates markdown text that you can review and edit in gotoHuman, and only then is the existing Google Doc updated. In this example we use AI to update our company's content strategy for the next quarter.

## How It Works

- The AI Agent has access to other documents that provide enough context to write the content strategy. We ask it to generate the text in markdown format.
- To ensure our strategy document is not changed without our approval, we request a human review using gotoHuman. There the markdown content can be edited and properly previewed.
- Our workflow resumes once the review is completed. We check if the content was approved and then write the (potentially edited) markdown to our Google Docs file via the Google Drive node (see the sketch below).

## How to set up

1. Most importantly, install the verified gotoHuman node before importing this template! (Just add the node to a blank canvas before importing. Works with n8n cloud and self-hosted.)
2. Set up your credentials for gotoHuman, OpenAI, and Google Docs/Drive.
3. In gotoHuman, select and create the pre-built review template "Strategy agent" or import the ID: F4sbcPEpyhNKBKbG9C1d
4. Select this template in the gotoHuman node.

## Requirements

You need accounts for:
- gotoHuman (human supervision)
- OpenAI (doc writing)
- Google Docs/Drive

## How to customize

- Let the workflow run on a schedule, or create and connect a manual trigger in gotoHuman that lets you capture additional human input to feed your agent.
- Provide the agent with more context to write the content strategy.
- Use the gotoHuman response (or a Google Drive file change trigger) to run additional AI agents that can execute on the new strategy.
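A minimal sketch of the approval check that runs after the gotoHuman review completes. The response field names depend on your review template, so treat `response` and `reviewedContent` as placeholders and map them to the fields actually returned by the gotoHuman node.

```javascript
// Check the human review outcome before touching the Google Doc – illustrative only.
// Field names are placeholders; inspect the gotoHuman node's output for the real ones.
const review = $input.first().json;

const approved = review.response === 'approved';   // hypothetical status field
const markdown = review.reviewedContent ?? '';     // hypothetical (possibly edited) content

if (!approved) {
  // Stop here: nothing is written to the strategy document without approval.
  return [];
}

// Pass the approved markdown on to the Google Drive/Docs update step.
return [{ json: { markdown } }];
```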
by Atta
This workflow automatically turns any YouTube video into a structured blog post with Gemini AI. By sending a simple POST request with a YouTube URL to a webhook, it downloads the video's audio, transcribes the content, and generates a blog-ready article with a title, description, tags, and category. The final result, along with the full transcript and original video URL, is delivered to your chosen webhook or CMS.

## How it works

The workflow handles the entire process of transforming YouTube videos into complete blog posts using Gemini AI transcription and structured text generation. Once triggered, it:

- Downloads the video's audio
- Transcribes the spoken content into text
- Generates a blog post in the same language as the video's original language
- Creates:
  - A clear and engaging title
  - A short description
  - Suggested category and tags
  - The full transcript of the video
  - The original YouTube video URL

This makes it easy to repurpose video content into publish-ready articles in minutes. This template is ideal for content creators, marketers, educators, and bloggers who want to quickly turn video content into written posts without manual transcription or editing.

## Setup Instructions

1. Install yt-dlp on your local machine or the server where n8n runs. This is required to download YouTube audio.
2. Get a Google Gemini API key and configure it in your AI nodes.
3. Webhook input configuration:
   - Endpoint: the workflow starts with a Webhook Trigger.
   - Method: POST
   - Example request body: `{ "videoUrl": "https://www.youtube.com/watch?v=lW5xEm7iSXk" }`
4. Configure the output webhook: add your target endpoint in the last node where the blog post JSON is sent (an example payload appears at the end of this description). This could be your CMS, a Notion database, or another integration.

## Customization Guidance

- **Writing Style**: Update the AI Agent's prompt to adjust tone (e.g., casual, professional, SEO-optimized).
- **Metadata**: Modify how categories and tags are generated to fit your website's taxonomy.
- **Integration**: Swap the final webhook with WordPress, Ghost, Notion, or Slack to fit your publishing workflow.
- **Transcript Handling**: Save the full transcript separately if you also want searchable video archives.
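As a reference for whatever CMS or webhook receives the result, a sketch of the blog-post payload the final node might send. The keys are assumptions based on the fields listed above; align them with the JSON your AI Agent actually produces.

```javascript
// Example shape of the final blog-post payload – field names are assumptions.
const blogPost = {
  title: 'How Retrieval-Augmented Generation Works',       // clear, engaging title
  description: 'A short summary of the video content...',  // short description
  category: 'AI & Automation',                             // suggested category
  tags: ['rag', 'llm', 'tutorial'],                        // suggested tags
  transcript: 'Full transcript of the spoken content...',  // full transcript
  videoUrl: 'https://www.youtube.com/watch?v=lW5xEm7iSXk', // original video URL
};

// In n8n, a Code node could return this so the HTTP Request node posts it onward.
return [{ json: blogPost }];
```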
by Khairul Muhtadin
The Prompt converter workflow tackles the challenge of turning your natural language video ideas into perfectly formatted JSON prompts tailored for Veo 3 video generation. By leveraging Langchain AI nodes and Google Gemini, this workflow automates and refines your input to help you create high-quality videos faster and with more precision—think of it as your personal video prompt translator that speaks fluent cinematic!

## 💡 Why Use Prompt Converter?

- **Save time**: Automate converting complex video prompts into structured JSON, cutting manual formatting headaches and boosting productivity.
- **Avoid guesswork**: Eliminate unclear video prompt details by generating detailed, cinematic descriptions that align perfectly with Veo 3 specs.
- **Improve output quality**: Optimize every parameter for Veo 3's video generation model to get realistic and stunning results every time.
- **Gain a creative edge**: Turn vague ideas into vivid video concepts with AI-powered enhancement—your video project's secret weapon.

## ⚡ Perfect For

- **Video creators**: Content developers wanting quick, precise video prompt formatting without coding hassles.
- **AI enthusiasts**: Developers and hobbyists exploring Langchain and Google Gemini for media generation.
- **Marketing teams**: Professionals creating video ads or visuals who need consistent prompt structuring that saves time.

## 🔧 How It Works

- ⏱ **Trigger**: The user submits a free-text prompt via message or webhook.
- 📎 **Process**: The text goes through an AI model that understands and reworks it into detailed JSON parameters tailored for Veo 3 (an example JSON prompt appears at the end of this description).
- 🤖 **Smart Logic**: Langchain nodes parse and optimize the prompt with cinematic details, set reasonable defaults, and structure the data precisely.
- 💌 **Output**: The refined JSON prompt is sent to Google Gemini for video generation with optimized settings.

## 🔐 Quick Setup

1. Import the JSON file to your n8n instance.
2. Add credentials: Azure OpenAI, Gemini API, OpenRouter API.
3. Customize: adjust prompt templates or default parameters in the Prompt converter node.
4. Test: run your workflow with sample text prompts to see videos come to life.

## 🧩 You'll Need

- Active n8n instance
- Azure OpenAI API
- Gemini API Key
- OpenRouter API (alternative AI option)

## 🛠️ Level Up Ideas

Add integration with video hosting platforms to auto-upload generated videos.

## 🧠 Nodes Used

- **Prompt Input** (Chat Trigger)
- **OpenAI** (Azure OpenAI GPT model)
- **Alternative** (OpenRouter API)
- **Prompt converter** (Langchain chain LLM for JSON conversion)
- **JSON parser** (structured output extraction)
- **Generate a video** (Google Gemini video generation)

Made by: Khaisa Studio
Tags: video generation, AI, Langchain, automation, Google Gemini
Category: Video Production
Need custom work? Contact me
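For illustration, the kind of structured prompt the converter might emit from a one-line idea such as "a surfer riding a wave at golden hour". The workflow does not prescribe a fixed schema here, so every field below is an assumption; adapt it to whatever structure your Prompt converter node is instructed to produce.

```javascript
// Hypothetical output of the Prompt converter node – field names are illustrative only.
const veoPrompt = {
  description: 'A lone surfer carves across a glassy wave at golden hour, spray catching the light.',
  style: 'cinematic, photorealistic',
  camera: { movement: 'slow dolly-in', angle: 'low, water level', lens: '35mm' },
  lighting: 'warm backlight, golden hour',
  duration_seconds: 8,
  aspect_ratio: '16:9',
  audio: 'crashing waves, distant gulls',
};

return [{ json: veoPrompt }];
```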
by WeblineIndia
# Fill iOS localization gaps from .strings → Google Sheets and PR with placeholders (GitHub)

This n8n workflow automatically identifies missing translations in .strings files across iOS localizations (e.g., Base.lproj vs fr.lproj) and generates a report in Google Sheets. Optionally, it creates a GitHub PR to insert placeholder strings ("TODO_TRANSLATE") so builds don't fail. Supports DRY_RUN mode.

## Who's it for

- iOS teams who want fast feedback on missing translations.
- Localization managers who want a shared sheet to assign work to translators.

## How it works

1. A GitHub Webhook triggers on push or pull request.
2. The iOS repo is scanned for .strings files under Base.lproj or en.lproj and their target-language counterparts.
3. It compares keys and identifies what's missing (see the sketch after the use-case examples below).
4. A new or existing Google Sheet tab (e.g., fr) is updated with the missing entries.
5. If enabled, it creates a GitHub PR with placeholder keys (e.g., "TODO_TRANSLATE").

## How to set up

1. Import the workflow JSON into your n8n instance.
2. Set the Config node values, for example:

```json
{
  "GITHUB_OWNER": "your-github-user-name",
  "GITHUB_REPO": "your-iOS-repo-name",
  "BASE_BRANCH": "develop",
  "SHEET_ID": "<YOUR_GOOGLE_SHEET_ID>",
  "ENABLE_PR": "true",
  "IOS_SOURCE_GLOB": "/Base.lproj/*.strings,/en.lproj/*.strings",
  "IOS_TARGET_GLOB": "*/.lproj/*.strings",
  "PLACEHOLDER_VALUE": "TODO_TRANSLATE",
  "BRANCH_TEMPLATE": "chore/l10n-gap-{{YYYYMMDD}}"
}
```

3. Create the GitHub webhook:
   - URL: `https://your-n8n-instance/webhook/l10n-gap-ios`
   - Content-Type: application/json
   - Trigger on: Push, Pull Request
4. Connect credentials:
   - GitHub token with repo scope
   - Google Sheets API
   - (Optional) Slack OAuth + SMTP

## Requirements

| Tool | Needed For | Notes |
| ---------------- | -------------------- | ---------------------------------------- |
| GitHub Repo | Webhook, API for PRs | repo token or App |
| Google Sheets | Sheet output | Needs valid SHEET_ID or create-per-run |
| Slack (optional) | Notifications | chat:write scope |
| SMTP (optional) | Email fallback | Standard SMTP creds |

## How to customize

- **Multiple Locales**: Add comma-separated values to TARGET_LANGS_CSV (e.g., fr,de,es).
- **Globs**: Adjust IOS_SOURCE_GLOB and IOS_TARGET_GLOB to scan only certain modules or file patterns.
- **Ignore Rules**: Add IGNORE_KEY_PREFIXES_CSV to skip certain internal/debug strings.
- **Placeholder Value**: Change PLACEHOLDER_VALUE to something meaningful like "@@@".
- **Slack/Email**: Set SLACK_CHANNEL and EMAIL_FALLBACK_TO_CSV appropriately.
- **DRY_RUN**: Set to true to skip GitHub PR creation but still update the sheet.

## Add-ons

- **Android support**: Add a second path for strings.xml (values → values-<lang>), same diff → Sheets → placeholder PR.
- **Multiple languages at once**: Expand TARGET_LANGS_CSV and loop tabs + placeholder commits per locale.
- **.stringsdict handling**: Validate plural/format entries and open a precise PR.
- **Translator DMs**: Provide a LANG → Slack handle/email map to DM translators with their specific file/key counts.
- **GitLab/Bitbucket variants**: Replace GitHub API calls with GitLab/Bitbucket equivalents to open Merge Requests.

## Use Case Examples

- Before a test build, ensure fr has all keys present—placeholders keep the app compiling.
- A weekly run creates a single sheet for translators and a PR with placeholders, avoiding last-minute breakages.
- A new screen adds 12 strings; the bot flags and pre-fills them across locales.
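A minimal sketch of the key-diff step referenced in "How it works", assuming each .strings file has already been fetched as plain text. It parses simple `"key" = "value";` lines (comments and .stringsdict plurals are ignored) and returns the keys missing from the target locale.

```javascript
// Diff Base/en .strings keys against a target locale – illustrative sketch for a Code node.
// Assumes `baseContent` and `targetContent` hold the raw .strings file text.
const parseStrings = (content) => {
  const keys = new Map();
  // Matches lines of the form: "some.key" = "Some value";
  const re = /^"((?:[^"\\]|\\.)+)"\s*=\s*"((?:[^"\\]|\\.)*)"\s*;/gm;
  let m;
  while ((m = re.exec(content)) !== null) {
    keys.set(m[1], m[2]);
  }
  return keys;
};

const { baseContent = '', targetContent = '' } = $input.first().json;
const baseKeys = parseStrings(baseContent);
const targetKeys = parseStrings(targetContent);

// Keys present in Base/en but absent from the target locale
const missing = [...baseKeys.keys()]
  .filter((k) => !targetKeys.has(k))
  .map((k) => ({ key: k, sourceValue: baseKeys.get(k), placeholder: 'TODO_TRANSLATE' }));

return [{ json: { missingCount: missing.length, missing } }];
```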
## Common troubleshooting

| Issue | Possible Cause | Solution |
| ------------------------ | --------------------------------------------- | ------------------------------------------------------ |
| No source files found | Glob doesn't match Base.lproj or en.lproj | Adjust IOS_SOURCE_GLOB |
| Target file missing | fr.lproj doesn't exist yet | Will be created in the placeholder PR |
| Parsing skips entries | Non-standard string format in file | Ensure proper .strings format `"key" = "value";` |
| Sheet not updating | SHEET_ID missing or insufficient permission | Add a valid ID or allow write access |
| PR not created | ENABLE_PR=false or no missing keys | Enable PR and ensure at least one key is missing |
| Slack/Email not received | Missing credentials or config | Configure Slack/SMTP properly and set recipient fields |

## Need Help?

Want to expand this for Android? Loop through 5+ locales at once? Or replace GitHub with GitLab? Contact our n8n Team at WeblineIndia with your repo & locale setup and we'll help tailor it to your translation workflow!
by AFK Crypto
## Try It Out!

🚀 Reddit Crypto Intelligence & Market Spike Detector

### 🧠 Workflow Description

Reddit Crypto Intelligence & Market Spike Detector is an automated market sentiment and price-monitoring workflow that connects social chatter with real-time crypto price analytics. It continuously scans new posts from r/CryptoCurrency, extracts recently mentioned coins, checks live price movements via CoinGecko, and alerts you on Discord when a significant spike or drop occurs. This automation empowers traders, analysts, and communities to spot early market trends before they become mainstream — all using free APIs and open data.

### ⚙️ How It Works

1. **Monitor Reddit Activity**
   - Automatically fetches the latest posts from r/CryptoCurrency using Reddit's free RSS feed.
   - Captures trending titles, post timestamps, and mentions of coins or tokens (e.g., $BTC, $ETH, $SOL, $PEPE).
2. **Extract Coin Mentions**
   - A Code node parses the feed using the regex `\$[A-Za-z0-9]{2,10}` to identify any symbols or tickers discussed (see the sketch at the end of this description).
   - Removes duplicates and normalizes all results for accurate data mapping.
3. **Fetch Market Data**
   - Each detected coin symbol is matched with CoinGecko's public API to fetch live market data, including current price, market rank, and 24-hour price change.
   - No API key required — a completely free and reliable source.
4. **Detect Market Movement**
   - A second Code node filters the fetched data to identify price movements greater than ±5% within the last 24 hours.
   - This helps isolate meaningful market action from routine fluctuations.
5. **Generate and Send Alerts**
   - When a spike or dip is detected, the workflow composes a rich alert message including: 💎 coin name and symbol, 💰 current price, 📈 24h percentage change, and 🕒 timestamp of detection.
   - The message is sent automatically to your Discord channel using a preconfigured webhook.

### 💬 Example Output

🚨 Crypto Reddit Mention & Price Spike Alert! 🚨
💎 ETHEREUM (ETH)
💰 $3,945.23
📈 Change: +6.12%

💎 SOLANA (SOL)
💰 $145.88
📈 Change: +8.47%

🕒 Checked at: 2025-10-31T15:00:00Z

If no coins cross the ±5% threshold: "No price spikes detected in the latest Reddit check." 🔔 #MarketIntel #CryptoSentiment #PriceAlert

### 🪄 Key Features

- 🧠 **Social + Market Intelligence** – combines Reddit sentiment with live market data to detect potential early signals.
- 🔎 **Automated Coin Detection** – dynamically identifies newly discussed tokens from live posts.
- 📊 **Smart Spike Filtering** – highlights only meaningful movements above configurable thresholds.
- 💬 **Discord Alerts** – delivers clear, structured, and timestamped alerts to your community automatically.
- ⚙️ **Fully No-Cost Stack** – uses free Reddit and CoinGecko APIs with no authentication required.

### 🧩 Use Cases

- **Crypto Traders**: Detect early hype or momentum shifts driven by social chatter.
- **Analysts**: Automate social sentiment tracking tied directly to live market metrics.
- **Community Managers**: Keep members informed about trending coins automatically.
- **Bots & AI Assistants**: Integrate this logic to enhance automated trading signals or alpha alerts.

### 🧰 Required Setup

- Discord Webhook URL – for automatic alert posting.
- (Optional) CoinGecko API endpoint (no API key required).
- n8n instance – self-hosted or Cloud; the free tier is sufficient.
- Workflow schedule – recommended: hourly (Cron node interval = 1 hour).

AFK Crypto
Website: afkcrypto.com
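A minimal sketch of the two Code-node steps described above: ticker extraction with the `\$[A-Za-z0-9]{2,10}` regex and the ±5% spike filter. The sample titles and the CoinGecko-style fields (`current_price`, `price_change_percentage_24h`) are assumptions about the upstream data shape.

```javascript
// Step 2: extract and dedupe ticker mentions from Reddit post titles (sample data for illustration).
const titles = ['ETH looking strong today', 'Is $SOL about to break out? $PEPE too'];
const tickerRegex = /\$[A-Za-z0-9]{2,10}/g;
const mentioned = [
  ...new Set(titles.flatMap((t) => t.match(tickerRegex) || []).map((s) => s.slice(1).toUpperCase())),
]; // only $-prefixed symbols match -> ['SOL', 'PEPE']

// Step 4: keep only mentioned coins that moved more than ±5% in the last 24h (field names assumed).
const SPIKE_THRESHOLD = 5;
const marketData = [
  { symbol: 'SOL', current_price: 145.88, price_change_percentage_24h: 8.47 },
  { symbol: 'PEPE', current_price: 0.0000091, price_change_percentage_24h: 1.2 },
];
const spikes = marketData.filter(
  (c) => mentioned.includes(c.symbol) && Math.abs(c.price_change_percentage_24h) >= SPIKE_THRESHOLD
);

console.log(spikes); // only SOL passes the ±5% filter
```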
by Ms. Phuong Nguyen (phuongntn)
An AI Recruiter that screens, scores, and ranks candidates in minutes — directly inside n8n.

## 🧠 Overview

An AI-powered recruiter workflow that compares multiple candidate CVs with a single Job Description (JD). It analyzes text content, calculates fit scores, identifies strengths and weaknesses, and provides automated recommendations.

## ⚙️ How it works

- 🔹 Webhook Trigger – upload one Job Description (JD) and multiple CVs (PDF or text)
- 🔹 File Detector – auto-identifies JD vs CV
- 🔹 Extract & Merge – reads text and builds the candidate dataset
- 🔹 🤖 AI Recruiter Agent – compares JD & CVs → returns Fit Score, Strengths, Weaknesses, and Recommendation
- 🔹 📤 Output Node – sends structured JSON or a summary table for HR dashboards or a Chat UI (an example report appears below)

Example: upload JD.pdf + 3 candidate CVs → get an instant JSON report with the top match and recommendations.

## 🧩 Requirements

- OpenAI or compatible AI Agent connection (no hardcoded API keys).
- Input files in PDF or text format (English or Vietnamese supported).
- n8n Cloud or Self-Hosted v1.50+ with AI Agent nodes enabled.
- 🔸 OpenAI API Key or n8n AI Agent credential required.

## 🧱 Customizing this workflow

- Swap the AI model with Gemini, Claude, or another LLM.
- Add a Google Sheets export node to save results.
- Connect to SAP HR or internal employee APIs.
- Adjust the scoring logic or include additional attributes (experience, skills, etc.).

## 👩‍💼 Author

https://www.linkedin.com/in/nguyen-phuong-17a71a147/
Empowering HR through intelligent, data-driven recruitment.
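For reference, the kind of structured report the AI Recruiter Agent can be asked to return for each candidate. The exact keys are assumptions based on the fields named above (Fit Score, Strengths, Weaknesses, Recommendation); adjust them to match your agent's output parser.

```javascript
// Hypothetical per-candidate report shape – adjust keys to your agent's structured output.
const report = {
  candidate: 'Jane Doe',
  fitScore: 82,                                     // 0–100 match against the JD
  strengths: ['5 years of React', 'led a team of 4'],
  weaknesses: ['no Kubernetes experience'],
  recommendation: 'Proceed to technical interview',
};

return [{ json: report }];
```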
by AI/ML API | D1m7asis
## Who's it for

Teams and makers who want a plug-and-play vision bot: users send a photo in Telegram, the bot returns a concise description plus OCR text. No custom servers required—just n8n, a Telegram bot, and an AIMLAPI key.

## What it does / How it works

The workflow listens for new Telegram messages, fetches the highest-resolution photo, converts it to base64, normalizes the MIME type, and calls AIMLAPI (GPT-4o Vision) via the HTTP Request node using the OpenAI-compatible messages format with an image_url data URI (a sketch of the request body appears at the end of this description). The model returns a short caption and extracted text. The answer is sent back to the same Telegram chat.

## Requirements

- n8n instance (self-hosted or cloud)
- Telegram bot token (from @BotFather)
- AIMLAPI account and API key (OpenAI-compatible endpoint)

## How to set up

1. Create a Telegram bot with @BotFather and copy the token.
2. In n8n, add Telegram credentials (no hardcoded tokens in nodes).
3. Add AIMLAPI credentials with your API key (base URL: https://api.aimlapi.com/v1).
4. Import the workflow JSON and connect credentials in the nodes.
5. Execute the trigger and send a photo to your bot to test.

## How to customize the workflow

- Modify the vision prompt (e.g., add brand, language, or formatting rules).
- Switch models within AIMLAPI (any vision-capable model using the same messages schema).
- Add an IF branch for text-only messages (reply with guidance).
- Log usage to Google Sheets or a database (user id, file id, response).
- Add rate limits, user allowlists, or Markdown formatting in Telegram responses.
- Increase timeouts/retries in the HTTP Request node for long-running images.
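A minimal sketch of the OpenAI-compatible request body the HTTP Request node sends, assuming the photo has already been converted to a base64 data URI by the previous node. The model name is an example and the chat-completions route is what OpenAI-compatible endpoints typically expose; confirm both against AIMLAPI's documentation.

```javascript
// Build the vision request body for an OpenAI-compatible endpoint – illustrative sketch.
const { imageBase64, mimeType = 'image/jpeg' } = $input.first().json; // assumed field names

const body = {
  model: 'gpt-4o', // example; pick any vision-capable model available on AIMLAPI
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image briefly, then extract any visible text (OCR).' },
        { type: 'image_url', image_url: { url: `data:${mimeType};base64,${imageBase64}` } },
      ],
    },
  ],
  max_tokens: 500,
};

// Pass the body on to the HTTP Request node (POST {baseUrl}/chat/completions).
return [{ json: { body } }];
```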