by Le Nguyen
**Description (How it works)**

This workflow keeps your Zalo Official Account access token valid and easy to reuse across other flows, with no external server required.

**High-level steps**

- Scheduled refresh runs on an interval to renew the access token before it expires.
- Static Data cache (global) stores access/refresh tokens plus expiries for reuse by any downstream node.
- OAuth exchange calls Zalo OAuth v4 with your app_id and secret_key to get a fresh access token.
- Immediate output returns the current access token to the next nodes after each refresh.
- Operational webhooks include:
  - A reset webhook to clear the cache when rotating credentials or testing.
  - A token peek webhook to read the currently cached token for other services.

**Setup steps (estimated time: ~8–15 minutes)**

1. Collect Zalo credentials (2–3 min): Obtain app_id, secret_key, and a valid refresh_token.
2. Import & activate workflow (1–2 min): Import the JSON into n8n and activate it.
3. Wire inputs (2–3 min): Point the “Set Refresh Token and App ID” node to your env vars (or paste values for a quick test).
4. Adjust schedule & secure webhooks (2–3 min): Tune the run interval to your token TTL; protect the reset/peek endpoints (e.g., with a secret parameter or an IP allowlist).
5. Test (1–2 min): Execute once to populate Static Data; optionally try the token peek and reset webhooks to confirm behavior.
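The cache-and-refresh decision above can be sketched as a simple expiry check. This is a minimal illustration, not the workflow's actual code: the record shape, field names, and safety margin are all hypothetical stand-ins for whatever the Static Data cache stores.

```python
import time

# Hypothetical shape of the record kept in n8n Static Data (global).
cache = {
    "access_token": "abc123",
    "refresh_token": "def456",
    "expires_at": 0,  # Unix epoch seconds when the access token expires
}

REFRESH_MARGIN = 300  # renew 5 minutes before actual expiry (illustrative value)

def needs_refresh(record, now=None, margin=REFRESH_MARGIN):
    """Return True when the cached access token is missing or close to expiry."""
    now = time.time() if now is None else now
    if not record.get("access_token"):
        return True
    return now >= record.get("expires_at", 0) - margin

# A stale cache triggers a refresh; a fresh one does not.
print(needs_refresh(cache, now=1_000))                            # True
print(needs_refresh({**cache, "expires_at": 2_000}, now=1_000))   # False
```

Running the check slightly before the real expiry (the margin) is what lets downstream nodes always read a valid token from the cache.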
by Ali Khosravani
This workflow automatically generates natural product comments using AI and posts them to your WooCommerce store. It helps boost engagement and makes product pages look more active and authentic.

**How It Works**

1. Fetches all products from your WooCommerce store.
2. Builds an AI prompt based on each product’s name and description.
3. Uses OpenAI to generate a short, human-like comment (neutral, positive, negative, or questioning).
4. Assigns a random reviewer name and email.
5. Posts the comment back to WooCommerce as a product review.

**Requirements**

- n8n version 1.49.0 or later (recommended).
- Active OpenAI API key.
- WooCommerce installed and REST API enabled.
- WordPress API credentials (Consumer Key & Consumer Secret).

**Setup Instructions**

1. Import this workflow into n8n.
2. Add your credentials in n8n > Credentials: OpenAI API (API key) and WooCommerce API (consumer key & secret).
3. Replace the sample URL https://example.com with your own WordPress/WooCommerce site URL.
4. Execute manually or schedule it to run periodically.

**Categories**

AI & Machine Learning, WooCommerce, WordPress, Marketing, Engagement

**Tags**

ai, openai, woocommerce, comments, automation, reviews, n8n

> Note: AI-generated comments should be reviewed periodically to ensure they align with your store’s policies and brand voice.
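The "random reviewer + post back as review" steps can be sketched as assembling the body for WooCommerce's product-reviews endpoint. The reviewer pool, rating range, and helper name below are hypothetical; only the general field names follow the WooCommerce REST API reviews resource.

```python
import random

# Hypothetical reviewer pool; replace with your own names/emails.
REVIEWERS = [
    ("Sara M.", "sara.m@example.com"),
    ("Tom K.", "tom.k@example.com"),
    ("Lena R.", "lena.r@example.com"),
]

def build_review(product_id, comment, rng=random):
    """Assemble a body for POST /wp-json/wc/v3/products/reviews (sketch)."""
    name, email = rng.choice(REVIEWERS)
    return {
        "product_id": product_id,
        "review": comment,
        "reviewer": name,
        "reviewer_email": email,
        "rating": rng.randint(3, 5),  # illustrative rating range
    }

payload = build_review(42, "Looks sturdy, shipping was quick.")
print(sorted(payload))  # the keys the reviews endpoint expects
```

In the workflow the same mapping happens in a Set/Code node before the WooCommerce HTTP call; the AI-generated comment fills the `review` field.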
by Luca Olovrap
**How it works**

This workflow provides a complete, automated backup solution for your n8n instance, running on a daily schedule to ensure your automations are always safe.

- **Automatic cleanup:** It first connects to your Google Drive to find and delete old backup folders, keeping your storage clean and organized based on a retention number you set.
- **Daily folder creation:** It then creates a new, neatly dated folder to store the current day's backup.
- **Fetches & saves workflows:** Finally, it uses the n8n API to get a list of all your workflows, converts each one to a .json file, and uploads them to the newly created folder in Google Drive.

**Set up steps**

Setup time: ~3 minutes. This template is designed to be as plug-and-play as possible. All configurations are grouped in a single node for quick setup.

1. **Connect your accounts:** Authenticate the Google Drive and n8n API nodes with your credentials.
2. **Configure main settings:** Open the Set node named **"CONFIG - Set your variables here"** and:
   - Paste the ID of your main Google Drive folder where backups will be stored.
   - Adjust the number of recent backups you want to keep.
3. **Activate workflow:** Turn on the workflow. Your automated backup system is now active.

For more detailed instructions, including how to find your Google Drive folder ID, please refer to the sticky notes inside the workflow.
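The automatic cleanup reduces to a keep-the-N-most-recent rule over dated folder names. A minimal sketch, assuming ISO-dated folder names (the naming scheme here is illustrative):

```python
# Hypothetical backup folder names, one dated folder per day.
folders = [
    "backup-2024-05-01",
    "backup-2024-05-03",
    "backup-2024-05-02",
    "backup-2024-05-04",
]

def folders_to_delete(names, keep):
    """Keep the `keep` most recent dated folders; return the rest for deletion."""
    ordered = sorted(names, reverse=True)  # ISO dates sort chronologically
    return ordered[keep:]

print(folders_to_delete(folders, keep=2))
# -> ['backup-2024-05-02', 'backup-2024-05-01']
```

Sorting lexicographically only works because ISO dates sort the same way chronologically, which is why dated folder names are a convenient retention key.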
by Elodie Tasia
Automatically create branded social media graphics, certificates, thumbnails, or marketing visuals using Bannerbear's template-based image generation API. Bannerbear's API is asynchronous by default: this workflow shows you how to use both asynchronous (webhook-based) and synchronous modes depending on your needs.

**What it does**

This workflow connects to Bannerbear's API to generate custom images based on your pre-designed templates. You can modify text, colors, and other elements programmatically. By default, Bannerbear works asynchronously: you submit a request, receive an immediate 202 Accepted response, and get the final image via webhook or polling. This workflow demonstrates both the standard asynchronous approach and an alternative synchronous method where you wait for the image to be generated before proceeding.

**How it works**

1. Set parameters - Configure your Bannerbear API key, template ID, and content (title, subtitle).
2. Choose mode - Select synchronous (wait for immediate response) or asynchronous (standard webhook delivery).
3. Generate image - The workflow calls Bannerbear's API with your modifications.
4. Receive result - Get the image URL, dimensions, and metadata in PNG or JPG format.

Async mode (recommended): The workflow receives a pending status immediately, then a webhook delivers the completed image when ready.

Sync mode: The workflow waits for the image generation to complete before proceeding.
**Setup requirements**

- A Bannerbear account (free tier available)
- A Bannerbear template created in your dashboard
- Your API key and template ID from Bannerbear
- For async mode: the ability to receive webhooks (a production n8n instance)

**How to set up**

1. Get Bannerbear credentials:
   - Sign up at bannerbear.com
   - Create a project and design a template
   - Copy your API key from Settings > API Key
   - Copy your template ID from the API Console
2. Configure the workflow:
   - Open the "SetParameters" node
   - Replace the API key and template ID with yours
   - Customize the title and subtitle text
   - Set call_mode to "sync" or "async"
3. For async mode (recommended):
   - Activate the "Webhook_OnImageCreated" node
   - Copy the production webhook URL
   - Add it to Bannerbear via Settings > Webhooks > Create a Webhook
   - Set the event type to "image_created"

**Customize the workflow**

- Modify the template parameters to match your Bannerbear template fields
- Add additional modification objects for more dynamic elements (colors, backgrounds, images)
- Connect to databases, CRMs, or other tools to pull content automatically
- Chain multiple image generations for batch processing
- Store generated images in Google Drive, S3, or your preferred storage
- Use async mode for high-volume generation without blocking your workflow
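The "modifications" payload the workflow sends can be sketched as below. This is a minimal illustration, assuming a template with layers named "title" and "subtitle" (layer names depend entirely on your own Bannerbear template); the function name is hypothetical.

```python
# Sketch of the request body the workflow builds for Bannerbear's images API.
def build_image_request(template_id, title, subtitle, webhook_url=None):
    body = {
        "template": template_id,
        "modifications": [
            {"name": "title", "text": title},        # layer names must match your template
            {"name": "subtitle", "text": subtitle},
        ],
    }
    if webhook_url:  # async mode: the finished image is delivered here
        body["webhook_url"] = webhook_url
    return body

req = build_image_request("tpl_abc123", "Hello", "World")
print("webhook_url" in req)  # False: a sync-style request has no webhook
```

Adding more entries to `modifications` (colors, image URLs, backgrounds) is how the "Customize the workflow" suggestions above are implemented.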
by Harsh Maniya
✅💬 Build Your Own WhatsApp Fact-Checking Bot with AI

Tired of misinformation spreading on WhatsApp? 🤨 This workflow transforms your n8n instance into a powerful, automated fact-checking bot! Send any news, claim, or question to a designated WhatsApp number, and this bot will use AI to research it, provide a verdict, and send back a summary with direct source links. Fight fake news with the power of automation and AI! 🚀

**How it works ⚙️**

This workflow uses a simple but powerful three-step process:

1. 📬 WhatsApp Gateway (Webhook node): This is the front door. The workflow starts when the Webhook node receives an incoming message from a user via a Twilio WhatsApp number.
2. 🕵️ The Digital Detective (Perplexity node): The user's message is sent to the Perplexity node. Here, a powerful AI model, instructed by a custom system prompt, analyzes the claim, scours the web for reliable information, and generates a verdict (e.g., ✅ Likely True, ❌ Likely False).
3. 📲 WhatsApp Reply (Twilio node): The final, formatted response, complete with the verdict, a simple summary, and source citations, is sent back to the original user via the Twilio node.

**Setup Guide 🛠️**

Follow these steps carefully to get your fact-checking bot up and running.

Prerequisites:

- A Twilio account with an active phone number or access to the WhatsApp Sandbox.
- A Perplexity AI account to get an API key.

1. Configure Credentials

You'll need to add API keys for both Perplexity and Twilio to your n8n instance.

- Perplexity AI: Go to your Perplexity AI API Settings. Generate and copy your API key. In n8n, go to Credentials > New, search for "Perplexity," and add your key.
- Twilio: Go to your Twilio Console Dashboard. Find and copy your Account SID and Auth Token. In n8n, go to Credentials > New, search for "Twilio," and add your credentials.

2. Set Up the Webhook and Tunnel

To allow Twilio's cloud service to communicate with your n8n instance, you need a public URL. The n8n tunnel is perfect for this.
- Start the n8n tunnel: If you are running n8n locally, you'll need to expose it to the web. Open your terminal and run: `n8n start --tunnel`
- Copy your webhook URL: Once the tunnel is active, open your n8n workflow. In the Receive Whatsapp Messages (Webhook) node, you will see two URLs: Test and Production. Copy the Production URL; this is the public URL that Twilio will use.

3. Configure Your Twilio WhatsApp Sandbox

- Go to the Twilio Console and navigate to Messaging > Try it out > Send a WhatsApp message.
- Select the Sandbox Settings tab.
- In the section "WHEN A MESSAGE COMES IN," paste your n8n Production webhook URL.
- Make sure the method is set to HTTP POST.
- Click Save.

**How to Use Your Bot 🚀**

1. Activate the Sandbox: To start, you (and any other users) must send a WhatsApp message with the join code (e.g., join given-word) to your Twilio Sandbox number. Twilio provides this phrase on the same Sandbox page.
2. Fact-Check Away! Once joined, simply send any claim or question to the Twilio number. For example: Did Elon Musk discover a new planet? Within moments, the workflow will trigger, and you'll receive a formatted reply with the verdict and sources right in your chat!

**Further Reading & Resources 🔗**

- n8n Tunnel Documentation
- Twilio for WhatsApp Quickstart
- Perplexity AI API Documentation
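What the Webhook node actually receives from Twilio is a form-encoded POST; the claim text arrives in the `Body` field and the sender in `From`. A small sketch of that parsing step (the sample payload is abbreviated; Twilio sends additional fields such as MessageSid and To):

```python
from urllib.parse import parse_qs

# Example form-encoded body as Twilio delivers it to the webhook.
raw = "From=whatsapp%3A%2B14155238886&Body=Did+Elon+Musk+discover+a+new+planet%3F"

def extract_claim(form_body):
    """Pull the sender and message text out of Twilio's webhook payload."""
    fields = parse_qs(form_body)
    return fields["From"][0], fields["Body"][0]

sender, claim = extract_claim(raw)
print(sender)  # whatsapp:+14155238886
print(claim)   # Did Elon Musk discover a new planet?
```

In n8n the Webhook node does this decoding for you; the sketch just shows which fields the Perplexity node's prompt is built from.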
by WeblineIndia
**Sync Android drawable assets from Figma to GitHub via PR (multi‑density PNG)**

This n8n workflow automatically fetches design assets (icons, buttons) from Figma, exports them into Android drawable folder formats based on resolution (e.g., mdpi, hdpi, etc.) and commits them to a GitHub branch, creating a Pull Request with all updates.

**Who’s it for**

- **Android / Flutter developers** managing multiple screen densities.
- **Design + Dev teams** wanting to automate asset delivery from Figma to the codebase.
- **Mobile teams** tired of manually exporting, resizing, organizing and uploading assets to GitHub.

**How it works**

1. Execute the flow manually or via trigger.
2. Fetches all export URLs from a Figma file.
3. Filters out only relevant components (Icon, Button).
4. Prepares Android drawable folders for each density.
5. Merges components with folder mapping.
6. Calls the Figma export API to get image URLs.
7. Filters out empty/invalid URLs.
8. Downloads all images as binary.
9. Merges images with metadata.
10. Renames and adjusts file names if needed.
11. Prevents duplicate PRs using conditional checks.
12. Commits files and opens a GitHub Pull Request.
**How to set up**

1. Set up your Figma token (with file access)
2. Get the Figma File Key and desired parent node ID
3. Connect your GitHub account in n8n
4. Prepare a GitHub branch for uploading assets
5. Add your drawable folders config
6. Adjust the file naming logic to match your code style
7. Run the workflow

**Requirements**

| Tool | Purpose |
|------------------|-------------------------------------------|
| Figma API Token | To fetch assets and export URLs |
| GitHub Token | To commit files and open PR |
| n8n | Workflow automation engine |
| Figma File Key | Target design file |
| Node Names | Named like Icon, Button |

**How to customize**

- **Add more component types** to extract (e.g., Avatar, Chip)
- **Change drawable folder structure** for other platforms (iOS, Web)
- **Add image optimization** before commit
- **Switch from branch PR to direct commit** if preferred
- **Add CI triggers** (e.g., Slack notifications or Jenkins trigger post-PR)

**Add‑ons**

- Slack Notification Node
- Commit summary to CHANGELOG.md
- Image format conversion (e.g., SVG → PNG, PNG → WebP)
- Auto-tag new versions based on new asset count

**Use Case Examples**

- Auto-export design changes as Android-ready assets
- Designers upload icons in Figma → Devs get a PR with ready assets
- Maintain pixel-perfect assets per density without manual effort
- Integrate this into weekly design-dev sync workflows

**Common Troubleshooting**

| Issue | Possible Cause | Solution |
|-----------------------------------|---------------------------------------------------|------------------------------------------------------------------------|
| Export URL is null | Figma node has no export settings | Add export settings in Figma for all components |
| Images not appearing in PR | Merge or file name logic is incorrect | Recheck merge nodes, ensure file names include extensions |
| Duplicate PR created | Condition node not properly checking branch | Update condition to check existing PR or use unique branch name |
| Figma API returns 403/401 | Invalid or expired Figma token | Regenerate token and update n8n credentials |
| GitHub file upload fails | Wrong path or binary input mismatch | Ensure correct folder structure (drawable-mdpi, etc.) and valid binary |
| Assets missing certain resolutions | Not all resolutions exported in Figma | Export all densities in Figma, or fall back to default |

**Need Help?**

If you’d like help setting up, customizing or expanding this flow, feel free to reach out to our n8n automation team at WeblineIndia! We can help you:

- Fine-tune Figma queries
- Improve file renaming rules
- Integrate Slack / CI pipelines
- Add support for other platforms (iOS/Web)

Happy automating!
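The density-to-folder mapping at the heart of this flow can be sketched as follows. The scale factors are the standard Android values (mdpi = 1x baseline); the name-sanitizing helper and function names are illustrative, not the workflow's actual code.

```python
# Android density buckets and their scale factors relative to mdpi.
DENSITIES = {
    "drawable-mdpi": 1.0,
    "drawable-hdpi": 1.5,
    "drawable-xhdpi": 2.0,
    "drawable-xxhdpi": 3.0,
    "drawable-xxxhdpi": 4.0,
}

def export_plan(component_name, base_px):
    """Map one Figma component to per-density file paths and pixel sizes."""
    # Android resource names must be lowercase [a-z0-9_]; a minimal sanitizer:
    safe = component_name.strip().lower().replace(" ", "_")
    return {
        f"{folder}/{safe}.png": round(base_px * scale)
        for folder, scale in DENSITIES.items()
    }

plan = export_plan("Icon Search", 24)
print(plan["drawable-xxhdpi/icon_search.png"])  # 72
```

This is also why the troubleshooting table stresses correct folder structure: GitHub uploads land in `drawable-mdpi/`, `drawable-hdpi/`, etc., and Android picks the right file per device density.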
by Pake.AI
**Overview**

This workflow extracts text from Instagram images by combining HikerAPI and OCR.Space. You can use it to collect text data from single posts or carousels, analyze visual content, or repurpose insights without manual copying. The process is fully automated inside n8n and helps marketers, researchers, and teams gather Instagram text quickly.

**How it works**

1. Takes an Instagram post URL, either a single post or a carousel
2. Retrieves media data using the HikerAPI Get Media endpoint
3. Detects the post type, whether single feed, carousel, or reel
4. For single posts, sends the image to OCR.Space for text extraction
5. For carousels, loops through each slide and extracts text from every image
6. Merges all parsed results into one raw text output

**Use cases**

- Collecting text data from Instagram images for research
- Extracting visual insights for marketing analysis
- Repurposing creator content without manual transcription
- Helping marketers, agencies, and researchers identify message patterns in visual posts

**Prerequisites**

- HikerAPI account with access to the Instagram media endpoint
- OCR.Space API key for image text extraction
- A valid Instagram post URL
- An n8n instance capable of running HTTP requests and looping through items

**Set up steps**

1. Prepare your API keys for HikerAPI and OCR.Space
2. Insert both API keys into their respective HTTP Request nodes
3. Paste the Instagram post URL into the IGPost URL node
4. Run the workflow to generate raw text extracted from Instagram images
5. Check the sticky notes inside the workflow for additional guidance

Made by @fataelislami
https://pake.ai
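The final merge step, collecting every slide's OCR output into one raw text, can be sketched like this. The sample data is made up; OCR.Space responses do nest text under `ParsedResults`/`ParsedText`, but the exact items your loop produces depend on your node setup.

```python
# One OCR.Space-style result per carousel slide (sample data is illustrative).
slides = [
    {"ParsedResults": [{"ParsedText": "Slide one headline\n"}]},
    {"ParsedResults": [{"ParsedText": ""}]},          # image with no text
    {"ParsedResults": [{"ParsedText": "Slide three tip"}]},
]

def merge_parsed_text(results):
    """Concatenate non-empty ParsedText blocks into one raw text output."""
    texts = []
    for r in results:
        for parsed in r.get("ParsedResults", []):
            text = parsed.get("ParsedText", "").strip()
            if text:
                texts.append(text)
    return "\n\n".join(texts)

print(merge_parsed_text(slides))
```

Skipping empty `ParsedText` blocks keeps text-free slides (pure images, logos) from inserting blank gaps into the merged output.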
by Alysson Neves
**Internet Search Chat with Firecrawl**

**How it works**

1. A user sends a query via the chat widget and the Chat Trigger captures the message.
2. The chat flow posts the query to the backend webhook (HTTP Request), which forwards it to the search service.
3. The webhook calls Firecrawl to run the web search and returns raw results.
4. A formatter converts the raw results into concise Markdown blocks with separators.
5. The chat node sends the formatted search summary back to the user.
6. Optional: an admin can manually trigger a credits check to review Firecrawl usage.

**Setup**

- [ ] Add Firecrawl API credentials in n8n.
- [ ] Update the webhook URL in the "Define constants" node to your n8n instance URL.
- [ ] Configure and enable the Chat Trigger (make it public and set initial messages).
- [ ] Ensure the webhook node path matches the constant and is reachable from the chat node.
- [ ] Test the chat by sending a sample query and verify the formatted search results.
- [ ] (Optional) Run the manual "Check credits" trigger to monitor Firecrawl account usage.
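The formatter step (raw results to Markdown blocks with separators) can be sketched as below. The result shape is hypothetical; adapt the field names to whatever your Firecrawl search call actually returns.

```python
# Hypothetical shape of search results coming back from the webhook.
results = [
    {"title": "n8n docs", "url": "https://docs.n8n.io",
     "description": "Workflow automation docs."},
    {"title": "Firecrawl", "url": "https://firecrawl.dev",
     "description": "Web search and scraping API."},
]

def to_markdown(items):
    """One bold-titled Markdown block per result, separated by horizontal rules."""
    blocks = [
        f"**{r['title']}**\n{r['description']}\n{r['url']}"
        for r in items
    ]
    return "\n\n---\n\n".join(blocks)

md = to_markdown(results)
print(md.count("---"))  # 1: one separator between two blocks
```

The separator (`---`) renders as a horizontal rule in most chat widgets that support Markdown, which keeps multi-result replies readable.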
by Miki Arai
**Who is this for**

- **Anime Enthusiasts:** Users who want to automate their watchlists based on specific voice actors or creators.
- **n8n Learners:** Anyone looking for a best-practice example of handling API rate limiting, loops, and data filtering.
- **Calendar Power Users:** People who want to populate their personal schedule with external data sources automatically.

**What it does**

1. **Search:** Queries the Jikan API for a specific person (e.g., voice actor "Mamoru Miyano").
2. **Wait:** Pauses execution to respect the API rate limit.
3. **Retrieve:** Fetches the list of anime roles associated with that person.
4. **Loop & Filter:** Iterates through the list one by one, fetches detailed status, and filters for shows marked as "Not yet aired."
5. **Schedule:** Creates an event in your Google Calendar using the anime's title and release date.

**Setup Steps**

1. Configure Search: Open the 'Search Voice Actor' node. In "Query Parameters," change the value of q to the name of the voice actor or person you want to track.
2. Connect Calendar: Open the 'Create an event' node. Select your Google Calendar credentials and choose the Calendar ID where events should be created.
3. Test: Run the workflow manually to populate your calendar.

**Requirements**

- An active n8n instance.
- A Google account (for Google Calendar).
- No API key is required for the Jikan API (it is public), but the rate limiting logic must be preserved.

**How to customize**

- **Change the Filter:** Modify the 'Check if Not Aired' node to track "Currently Airing" shows instead of upcoming ones.
- **Enrich Event Details:** Update the 'Create an event' node to include the MyAnimeList URL or synopsis in the calendar event description.
- **Search Different Entities:** Adjust the API endpoint to search for manga authors or specific studios instead of voice actors.

**Expected Result**

Upon successful execution, this workflow will:

1. Search for the specified voice actor.
2. Retrieve their upcoming anime roles.
3. Create events in your Google Calendar for anime that hasn't aired yet.

Example Calendar Entry:

- **Title:** Anime Name - Release Date
- **Description:** Details regarding the release...
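The Loop & Filter plus Schedule steps can be sketched as a status filter that shapes calendar events. The flattened field names below are illustrative simplifications; "Not yet aired" matches the status string Jikan uses for upcoming shows.

```python
# Per-anime detail records, simplified from what the Jikan detail call returns.
roles = [
    {"title": "Show A", "status": "Not yet aired", "aired_from": "2025-10-01"},
    {"title": "Show B", "status": "Currently Airing", "aired_from": "2025-01-05"},
]

def upcoming_events(entries):
    """Keep only unaired shows and shape them as calendar events."""
    return [
        {"summary": f"{e['title']} - Release Date", "date": e["aired_from"]}
        for e in entries
        if e["status"] == "Not yet aired"
    ]

events = upcoming_events(roles)
print(len(events), events[0]["summary"])  # 1 Show A - Release Date
```

Switching the comparison string to "Currently Airing" is exactly the filter customization suggested above.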
by as311
**Regional Prospecting for Registered Companies in Germany**

Find and qualify registered companies in specific regions using the Implisense Search API (Handelsregister). This API covers all officially registered companies in Germany (about 2.5 million).

**Input Parameters**

- query: Search terms (e.g., "software OR it")
- regionCode: ZIP/postal code region (e.g., "de-10")
- industryCode: NACE industry code (e.g., "J62")
- pageSize: Max results (1-1000)

**Quality Levels**

- **High:** Score ≥ 15 (active, website, full address)
- **Medium:** Score < 15

**How it works**

- Phase 1: Init
- Phase 2: Search
- Phase 3: Vetting

**Setup steps**

1. Configure credentials:
   - Create an account on RapidAPI (free tier available)
   - Set up RapidAPI credentials in n8n
   - Insert your RapidAPI x-rapidapi-key as the password
2. Configure search parameters (see Input Parameters above).
3. Connect a CRM/database. After the "Merge & Log Results" node, add:
   - An HTTP Request node for a REST API
   - A database node for direct insertion
   - Or a CRM-specific integration node
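The vetting phase's quality classification can be sketched as below. Only the threshold (score ≥ 15 means High) comes from the description; the individual component weights and field names are hypothetical and exist purely to make the rule concrete.

```python
# Hypothetical scoring: weights sum to 20 so that a company with an active
# status, a website, and a full address clears the >= 15 "High" threshold.
def quality_level(company):
    score = 0
    if company.get("active"):
        score += 10
    if company.get("website"):
        score += 5
    if company.get("full_address"):
        score += 5
    return ("High" if score >= 15 else "Medium"), score

level, score = quality_level({"active": True, "website": True, "full_address": False})
print(level, score)  # High 15
```

In the workflow this classification would sit between "Search" and "Merge & Log Results", tagging each company before it is handed to your CRM or database.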
by Gwa Shark
**Who is this for?**

Gamers who want to avoid poor, laggy in-game performance and find a well-performing server near them.

**Setup**

None!

**Modes Available**

- Auto - Optimized (recommended & default)
- Ping - Finds the lowest ping
- Latency - Lowest ping & highest FPS
by Joel Cantero
**YouTube Caption Extractor (Your Channel Only)**

Extracts clean transcripts from YOUR CHANNEL's YouTube video captions using the YouTube Data API v3.

⚠️ API Limitation: Only works with videos from YOUR OWN CHANNEL. It cannot access external/public videos.

**🎯 Use Cases**

- AI summarization & sentiment analysis
- Keyword extraction from your content
- Content generation from your videos
- Batch transcript processing

**🔄 How It Works (6 Steps)**

1. 📥 Input: Receives videoId + preferredLanguage
2. 🔍 API: Lists captions from your channel
3. 🆔 Selector: Picks the preferred language (falls back to the first track)
4. 📥 Download: Gets the VTT subtitle file
5. 🧹 Cleaning: Removes timestamps, [Music] tags, duplicates
6. ✅ Output: Clean transcript + metadata

**🚀 How to Use**

- Trigger with a JSON payload: {"youtubeVideoId": "YOUR_VIDEO_ID", "preferredLanguage": "es"}
- **The video ID must belong to your authenticated YouTube channel**
- Works as a sub-workflow (Execute Workflow Trigger) or replace with a Webhook/Form trigger
- Handles videos with no captions gracefully via a structured error response
- Output is ready for downstream AI processing or storage

**⚠️ Setup Required**

- **Change the YouTube credentials** in the **"List Captions"** and **"Download VTT"** nodes
- Video ID from your authenticated channel
- Sub-workflow or Webhook trigger
- Graceful no-captions handling

**🔧 Requirements**

- ✅ YouTube OAuth2 (youtube.captions.read scope)
- ✅ Updated credentials in the List Captions + Download VTT nodes
- ✅ n8n HTTP Request + Code nodes

**💬 Need Help?**

n8n Forum

Happy Automating! 🎉
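The cleaning step (remove timestamps, [Music] tags, and duplicates) can be sketched like this. The sample VTT, regexes, and function name are illustrative; real caption files vary, so treat this as a starting point rather than the workflow's actual Code node.

```python
import re

# Minimal sample of typical VTT content: header, cue timestamps, a sound tag,
# and a line repeated across consecutive cues.
vtt = """WEBVTT

00:00:01.000 --> 00:00:03.000
[Music]
Hello and welcome

00:00:03.000 --> 00:00:05.000
Hello and welcome
to the channel
"""

TIMESTAMP = re.compile(r"^\d{2}:\d{2}:\d{2}\.\d{3} --> ")

def clean_vtt(text):
    """Strip VTT scaffolding and return one clean transcript string."""
    out = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT" or TIMESTAMP.match(line):
            continue                                   # drop header/blank/cue lines
        line = re.sub(r"\[.*?\]", "", line).strip()    # drop [Music] and similar tags
        if line and (not out or out[-1] != line):      # drop consecutive duplicates
            out.append(line)
    return " ".join(out)

print(clean_vtt(vtt))  # Hello and welcome to the channel
```

Deduplicating only *consecutive* repeats is deliberate: auto-generated captions often re-emit the previous line in each new cue, while a genuinely repeated phrase later in the video should survive.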