by Davide
This workflow analyzes and humanizes AI-generated text automatically using Rephrasy AI. The workflow sends the text to an AI detection service to evaluate how "AI-like" it is. The text is broken down into individual sentences, and each sentence is scored on its likelihood of being AI-generated. Using a conditional logic step, the workflow identifies sentences with a high AI probability (score > 50). Only those sentences are sent to a humanization API, which rewrites them in a more natural, human-like style; sentences that are already considered human are left unchanged. Finally, all processed sentences are recombined into a single output text.

👉 In short, the workflow:

- Detects AI-generated content
- Filters only the parts that need improvement
- Rewrites them to sound more human
- Reassembles the final optimized text

## Key Benefits

1. ✅ **Selective Humanization**: Instead of rewriting the entire text, the workflow only modifies sentences that are likely AI-generated. This preserves natural content and reduces unnecessary changes.
2. ✅ **Cost Optimization**: By processing only flagged sentences through the humanizer API, it minimizes API usage and costs.
3. ✅ **Improved Text Quality**: The final output sounds more natural and less robotic, improving readability and engagement.
4. ✅ **Sentence-Level Precision**: The workflow analyzes text at a granular level, allowing highly targeted improvements rather than bulk rewriting.
5. ✅ **Automation & Scalability**: Once set up, the process is fully automated and can handle large volumes of text without manual intervention.
6. ✅ **Multi-language Support**: The workflow supports multiple languages (even though the humanization step is set to English), making it adaptable to different use cases.
7. ✅ **Maintain Structure & Meaning**: Since only specific sentences are modified, the original structure and intent of the text remain intact.
## How it works

This workflow processes a block of text, detects which sentences are likely AI-generated, and rewrites only those sentences to sound more human-like.

- **Trigger and Input**: The workflow starts manually. A "Set text" node allows the user to define the text to be processed (stored in the `text` field).
- **AI Detection**: The text is sent to the Rephrasy AI Detector API, which returns an overall AI score and a breakdown (`sentences`) with individual scores for each sentence.
- **Data Parsing**: A "Code" node transforms the API's sentence data into a structured array, tagging each sentence as "ai" (if the score > 50) or "human".
- **Splitting and Looping**: The "Split Out" node separates the sentences into individual items. The "Loop Over Items" node then iterates over each sentence to decide if it needs processing.
- **Conditional Humanization**: For each sentence, the "> 50?" node (IF condition) checks the AI score. If the score is greater than 50 (indicating AI), the workflow routes the sentence to the AI Humanizer node to be rewritten. If the score is 50 or less (human), the sentence passes through unchanged.
- **Reassembly**: Once the loop finishes, the "Aggregate Text" node collects all the sentences (both the rewritten AI sentences and the original human sentences) and reassembles them into a single, final text output.

## Set up steps

To get this workflow running, you need to configure the connections to the Rephrasy API and define the input text.

1. **Configure Credentials**: The workflow uses two nodes, AI Detector and AI Humanizer. Both require an HTTP Bearer Auth credential. You must create a credential in n8n named "Rephrasy" (or update the node settings to match your credential's name) and populate it with your valid Rephrasy API key.
2. **Set Input Text**: Locate the Set text node and modify the `text` assignment value. Currently, it contains a placeholder (XXX). Replace this with the text you wish to humanize, or configure it to reference data from a previous trigger (e.g., from a webhook or form).
3. **Review Language Settings**: In the AI Humanizer node, check the `language` parameter inside the JSON body. It is currently set to "English"; ensure this matches the language of the text you are processing.
4. **Activate Workflow**: Once the credentials are set and the input text is defined, toggle the workflow from Inactive to Active (or simply execute it manually using the "Execute Workflow" button if no trigger is needed).

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
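The "Data Parsing" step above can be sketched as an n8n Code node. This is a minimal illustration assuming the detector returns a `sentences` array of `{ text, score }` objects — the real Rephrasy response fields may differ:

```javascript
// Hypothetical Code-node helper: tag each sentence as "ai" or "human"
// based on the detector's per-sentence score and the workflow's threshold.
function tagSentences(sentences, threshold = 50) {
  return sentences.map((s) => ({
    text: s.text,
    score: s.score,
    label: s.score > threshold ? 'ai' : 'human',
  }));
}

// In an n8n Code node this would typically end with something like:
// return tagSentences($json.sentences).map((s) => ({ json: s }));
```

The "Split Out" node then fans these items out, and the "> 50?" IF node routes on the `label` (or the raw `score`) of each item.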
by Le Nguyen
## Description (How it works)

This workflow keeps your Zalo Official Account access token valid and easy to reuse across other flows, with no external server required.

### High-level steps

- **Scheduled refresh** runs on an interval to renew the access token before it expires.
- **Static Data cache (global)** stores access/refresh tokens plus expiries for reuse by any downstream node.
- **OAuth exchange** calls Zalo OAuth v4 with your `app_id` and `secret_key` to get a fresh access token.
- **Immediate output** returns the current access token to the next nodes after each refresh.
- **Operational webhooks** include:
  - A reset webhook to clear the cache when rotating credentials or testing.
  - A token peek webhook to read the currently cached token for other services.

### Setup steps (estimated time ~8–15 minutes)

1. **Collect Zalo credentials (2–3 min)**: Obtain `app_id`, `secret_key`, and a valid `refresh_token`.
2. **Import & activate workflow (1–2 min)**: Import the JSON into n8n and activate it.
3. **Wire inputs (2–3 min)**: Point the "Set Refresh Token and App ID" node to your env vars (or paste values for a quick test).
4. **Adjust schedule & secure webhooks (2–3 min)**: Tune the run interval to your token TTL; protect the reset/peek endpoints (e.g., a secret param or IP allowlist).
5. **Test (1–2 min)**: Execute once to populate Static Data; optionally try the token peek and reset webhooks to confirm behavior.
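The cache logic behind the scheduled refresh can be sketched as a pair of Code-node helpers, assuming the token and its expiry live in workflow Static Data. The field names (`accessToken`, `expiresAtMs`) are illustrative, not necessarily the template's actual ones:

```javascript
// Hypothetical helper: decide whether the cached access token needs
// refreshing, with a safety margin so it is renewed *before* expiry.
function needsRefresh(cache, nowMs, marginMs = 5 * 60 * 1000) {
  if (!cache || !cache.accessToken || !cache.expiresAtMs) return true;
  return nowMs >= cache.expiresAtMs - marginMs;
}

// After a successful OAuth exchange, store the new tokens and expiry.
// Zalo's token response carries access_token, refresh_token, expires_in.
function storeToken(cache, tokenResponse, nowMs) {
  cache.accessToken = tokenResponse.access_token;
  cache.refreshToken = tokenResponse.refresh_token; // rotated on each exchange
  cache.expiresAtMs = nowMs + Number(tokenResponse.expires_in) * 1000;
  return cache;
}
```

Running the refresh on an interval comfortably shorter than `expires_in` (plus the margin above) is what lets downstream flows read the peek webhook without ever hitting an expired token.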
by Harsh Maniya
# ✅💬 Build Your Own WhatsApp Fact-Checking Bot with AI

Tired of misinformation spreading on WhatsApp? 🤨 This workflow transforms your n8n instance into a powerful, automated fact-checking bot! Send any news, claim, or question to a designated WhatsApp number, and this bot will use AI to research it, provide a verdict, and send back a summary with direct source links. Fight fake news with the power of automation and AI! 🚀

## How it works ⚙️

This workflow uses a simple but powerful three-step process:

1. 📬 **WhatsApp Gateway (Webhook node)**: This is the front door. The workflow starts when the Webhook node receives an incoming message from a user via a Twilio WhatsApp number.
2. 🕵️ **The Digital Detective (Perplexity node)**: The user's message is sent to the Perplexity node. Here, a powerful AI model, instructed by a custom system prompt, analyzes the claim, scours the web for reliable information, and generates a verdict (e.g., ✅ Likely True, ❌ Likely False).
3. 📲 **WhatsApp Reply (Twilio node)**: The final, formatted response, complete with the verdict, a simple summary, and source citations, is sent back to the original user via the Twilio node.

## Setup Guide 🛠️

Follow these steps carefully to get your fact-checking bot up and running.

### Prerequisites

- A Twilio account with an active phone number or access to the WhatsApp Sandbox.
- A Perplexity AI account to get an API key.

### 1. Configure Credentials

You'll need to add API keys for both Perplexity and Twilio to your n8n instance.

- **Perplexity AI**: Go to your Perplexity AI API Settings. Generate and copy your API key. In n8n, go to Credentials → New, search for "Perplexity," and add your key.
- **Twilio**: Go to your Twilio Console Dashboard. Find and copy your Account SID and Auth Token. In n8n, go to Credentials → New, search for "Twilio," and add your credentials.

### 2. Set Up the Webhook and Tunnel

To allow Twilio's cloud service to communicate with your n8n instance, you need a public URL. The n8n tunnel is perfect for this.
- **Start the n8n tunnel**: If you are running n8n locally, you'll need to expose it to the web. Open your terminal and run: `n8n start --tunnel`
- **Copy your webhook URL**: Once the tunnel is active, open your n8n workflow. In the Receive Whatsapp Messages (Webhook) node, you will see two URLs: Test and Production. Copy the Test/Production URL. This is the public URL that Twilio will use.

### 3. Configure Your Twilio WhatsApp Sandbox

1. Go to the Twilio Console and navigate to Messaging → Try it out → Send a WhatsApp message.
2. Select the Sandbox Settings tab.
3. In the section "WHEN A MESSAGE COMES IN," paste your n8n Production webhook URL. Make sure the method is set to HTTP POST.
4. Click Save.

## How to Use Your Bot 🚀

- **Activate the Sandbox**: To start, you (and any other users) must send a WhatsApp message with the join code (e.g., `join given-word`) to your Twilio Sandbox number. Twilio provides this phrase on the same Sandbox page.
- **Fact-check away!** Once joined, simply send any claim or question to the Twilio number. For example: *Did Elon Musk discover a new planet?* Within moments, the workflow will trigger, and you'll receive a formatted reply with the verdict and sources right in your chat!

## Further Reading & Resources 🔗

- n8n Tunnel Documentation
- Twilio for WhatsApp Quickstart
- Perplexity AI API Documentation
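As a sketch of the glue logic between the three steps: Twilio's incoming WhatsApp webhook posts form fields such as `Body` (the message text) and `From`, and the outgoing reply can be assembled from the model's verdict before handing it to the Twilio node. The reply layout below is an assumption for illustration, not the template's exact prompt output:

```javascript
// Hypothetical Code-node helpers around the Twilio webhook and reply.
// Twilio's incoming WhatsApp webhook delivers the text in a form field "Body".
function extractClaim(webhookBody) {
  return (webhookBody.Body || '').trim();
}

// Assemble the outgoing WhatsApp message from the AI's verdict and sources.
function formatReply(verdict, summary, sources) {
  const lines = [verdict, '', summary, '', 'Sources:'];
  sources.forEach((url, i) => lines.push(`${i + 1}. ${url}`));
  return lines.join('\n');
}
```

In the workflow, `extractClaim` corresponds to reading the webhook item's body, and `formatReply`'s output is what the Twilio node sends back to the user's number.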
by WeblineIndia
# Sync Android drawable assets from Figma to GitHub via PR (multi‑density PNG)

This n8n workflow automatically fetches design assets (icons, buttons) from Figma, exports them into Android drawable folder formats based on resolution (e.g., mdpi, hdpi, etc.), and commits them to a GitHub branch, creating a Pull Request with all updates.

## Who's it for

- **Android / Flutter developers** managing multiple screen densities.
- **Design + Dev teams** wanting to automate asset delivery from Figma to the codebase.
- **Mobile teams** tired of manually exporting assets, resizing, organizing, and uploading to GitHub.

## How it works

1. Execute the flow manually or via a trigger.
2. Fetches all export URLs from a Figma file.
3. Filters out only relevant components (Icon, Button).
4. Prepares Android drawable folders for each density.
5. Merges components with the folder mapping.
6. Calls the Figma export API to get image URLs.
7. Filters out empty/invalid URLs.
8. Downloads all images as binary.
9. Merges images with metadata.
10. Renames and adjusts file names if needed.
11. Prevents duplicate PRs using conditional checks.
12. Commits files and opens a GitHub Pull Request.
## How to set up

1. Set up your Figma token (with file access).
2. Get the Figma file key and desired parent node ID.
3. Connect your GitHub account in n8n.
4. Prepare a GitHub branch for uploading assets.
5. Add your drawable folders config.
6. Adjust the file naming logic to match your code style.
7. Run the workflow.

## Requirements

| Tool | Purpose |
|------------------|-------------------------------------------|
| Figma API Token | To fetch assets and export URLs |
| GitHub Token | To commit files and open the PR |
| n8n | Workflow automation engine |
| Figma File Key | Target design file |
| Node Names | Named like Icon, Button |

## How to customize

- **Add more component types** to extract (e.g., Avatar, Chip)
- **Change the drawable folder structure** for other platforms (iOS, Web)
- **Add image optimization** before commit
- **Switch from branch PR to direct commit** if preferred
- **Add CI triggers** (e.g., Slack notifications or a Jenkins trigger post-PR)

## Add‑ons

- Slack notification node
- Commit summary to CHANGELOG.md
- Image format conversion (e.g., SVG → PNG, PNG → WebP)
- Auto-tag new versions based on new asset count

## Use Case Examples

- Auto-export design changes as Android-ready assets
- Designers upload icons in Figma → Devs get a PR with ready assets
- Maintain pixel-perfect assets per density without manual effort
- Integrate this into weekly design-dev sync workflows

## Common Troubleshooting

| Issue | Possible Cause | Solution |
|-----------------------------------|---------------------------------------------------|------------------------------------------------------------------------------|
| Export URL is null | Figma node has no export settings | Add export settings in Figma for all components |
| Images not appearing in PR | Merge or file name logic is incorrect | Recheck merge nodes; ensure file names include extensions |
| Duplicate PR created | Condition node not properly checking the branch | Update the condition to check for an existing PR or use a unique branch name |
| Figma API returns 403/401 | Invalid or expired Figma token | Regenerate the token and update n8n credentials |
| GitHub file upload fails | Wrong path or binary input mismatch | Ensure the correct folder structure (drawable-mdpi, etc.) and valid binary |
| Assets missing certain resolutions | Not all resolutions exported in Figma | Export all densities in Figma, or fall back to the default |

## Need Help?

If you'd like help setting up, customizing, or expanding this flow, feel free to reach out to our n8n automation team at WeblineIndia! We can help you:

- Fine-tune Figma queries
- Improve file renaming rules
- Integrate Slack / CI pipelines
- Add support for other platforms (iOS/Web)

Happy automating!
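The density mapping in steps 4–6 of "How it works" can be sketched as a small Code-node helper. Android's standard density buckets correspond to fixed scale factors relative to mdpi (1x), which is also how the Figma image-export API's `scale` parameter is typically driven; the helper and naming scheme here are illustrative, not the template's exact nodes:

```javascript
// Standard Android density buckets and their scale factors relative to mdpi.
const DENSITIES = {
  'drawable-mdpi': 1,
  'drawable-hdpi': 1.5,
  'drawable-xhdpi': 2,
  'drawable-xxhdpi': 3,
  'drawable-xxxhdpi': 4,
};

// Hypothetical helper: for one Figma component, produce one export job per
// density, mapping a component name like "Icon/Search" to a valid Android
// resource file name (lowercase letters, digits, and underscores only).
function buildExportJobs(componentName) {
  const fileName =
    componentName.toLowerCase().replace(/[^a-z0-9]+/g, '_').replace(/^_|_$/g, '') + '.png';
  return Object.entries(DENSITIES).map(([folder, scale]) => ({
    folder,           // target drawable folder in the repo
    scale,            // passed to the Figma export as the render scale
    path: `${folder}/${fileName}`,
  }));
}
```

Each resulting job then maps onto one Figma export call and one file in the GitHub commit tree.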
by Miki Arai
## Who is this for

- **Anime Enthusiasts**: Users who want to automate their watchlists based on specific voice actors or creators.
- **n8n Learners**: Anyone looking for a best-practice example of handling API rate limiting, loops, and data filtering.
- **Calendar Power Users**: People who want to populate their personal schedule with external data sources automatically.

## What it does

1. **Search**: Queries the Jikan API for a specific person (e.g., voice actor "Mamoru Miyano").
2. **Wait**: Pauses execution to respect the API rate limit.
3. **Retrieve**: Fetches the list of anime roles associated with that person.
4. **Loop & Filter**: Iterates through the list one by one, fetches the detailed status, and filters for shows marked as "Not yet aired."
5. **Schedule**: Creates an event in your Google Calendar using the anime's title and release date.

## Setup Steps

1. **Configure Search**: Open the 'Search Voice Actor' node. In "Query Parameters," change the value of `q` to the name of the voice actor or person you want to track.
2. **Connect Calendar**: Open the 'Create an event' node. Select your Google Calendar credentials and choose the Calendar ID where events should be created.
3. **Test**: Run the workflow manually to populate your calendar.

## Requirements

- An active n8n instance.
- A Google account (for Google Calendar).
- No API key is required for the Jikan API (public), but the rate limiting logic must be preserved.

## How to customize

- **Change the Filter**: Modify the 'Check if Not Aired' node to track "Currently Airing" shows instead of upcoming ones.
- **Enrich Event Details**: Update the 'Create an event' node to include the MyAnimeList URL or synopsis in the calendar event description.
- **Search Different Entities**: Adjust the API endpoint to search for manga authors or specific studios instead of voice actors.

## Expected Result

Upon successful execution, this workflow will:

1. Search for the specified voice actor.
2. Retrieve their upcoming anime roles.
3. Create events in your Google Calendar for anime that hasn't aired yet.
Example calendar entry:

- **Title**: Anime Name - Release Date
- **Description**: Details regarding the release...
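The loop-and-filter step can be sketched as a Code node that keeps only unaired shows and shapes a calendar event from each. Jikan's anime objects expose a `status` string and an `aired.from` ISO date; the event shape below is an illustration rather than the exact Google Calendar node mapping:

```javascript
// Hypothetical helper: keep only shows Jikan marks as "Not yet aired"
// (and that have a known air date) and turn each into a simple event object.
function upcomingToEvents(animeList) {
  return animeList
    .filter((a) => a.status === 'Not yet aired' && a.aired && a.aired.from)
    .map((a) => ({
      summary: a.title,                   // calendar event title
      start: a.aired.from.slice(0, 10),   // YYYY-MM-DD for an all-day event
      description: a.url || '',           // e.g. MyAnimeList link
    }));
}
```

One event object per upcoming show then feeds the 'Create an event' node; shows without a confirmed `aired.from` date are skipped rather than scheduled on a placeholder day.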
by Paul Roussel
## How it works

- Upload a foreground video (AI actors, product demos, webcam footage)
- Provide a custom background video URL
- The API removes the video background with videobgremover.com
- Composites the foreground onto the background
- Downloads the result and uploads it to Google Drive
- Returns a shareable link

## Set up steps

- Get an API key at https://videobgremover.com/n8n (2 min)
- Import the workflow (1 min)
- Add the API key to n8n variables as VIDEOBGREMOVER_KEY (1 min)
- Connect Google Drive (2 min)
- Test with the manual trigger (1 min)
- Total: 7 minutes

## What you'll need

- VideoBGRemover API key ($0.50–$2.00 per minute)
- Google Drive account
- Publicly accessible video URLs
- n8n instance

## Perfect for

- AI UGC ad creators using HeyGen, Synthesia, Arcads
- Marketing agencies creating ad variations
- E-commerce product demos on custom backgrounds
- Social media content with branded scenes
- Video editors removing backgrounds at scale

## Key features

- Video composition with custom templates
- Audio mixing with adjustable volumes
- 20-second polling for status
- Google Drive integration
- Webhook automation support
- 3–5 minutes of processing time per minute of input video
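The "20-second polling for status" can be sketched as a loop that re-checks the processing job until it completes; in the actual workflow this is built from a Wait node plus an IF node rather than a single function. The `checkStatus` callback below stands in for the real HTTP status request, and the status strings are assumptions:

```javascript
// Hypothetical sketch of the polling the workflow implements with
// Wait (20 s) + IF nodes: re-check the job until it finishes or times out.
// `checkStatus` stands in for the HTTP call to the job-status endpoint.
function pollUntilDone(checkStatus, maxAttempts = 30) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = checkStatus(attempt); // in n8n: HTTP Request, then Wait 20 s
    if (status === 'completed' || status === 'failed') {
      return { status, attempts: attempt };
    }
  }
  return { status: 'timeout', attempts: maxAttempts };
}
```

With a 20-second interval, 30 attempts covers about 10 minutes of processing, which fits the listed 3–5 minutes of processing per minute of input for short clips; longer inputs would need a higher attempt cap.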
by PDF Vector
# Automated Research Paper Analysis Pipeline

This workflow automatically analyzes research papers by:

1. Parsing PDF documents into clean Markdown format
2. Extracting key information using AI analysis
3. Generating concise summaries and insights
4. Storing results in a database for future reference

Perfect for researchers, students, and academics who need to quickly understand the key points of multiple research papers.

## How it works

1. **Trigger**: Manual trigger or webhook with a PDF URL
2. **PDF Vector**: Parses the PDF document with LLM enhancement
3. **OpenAI**: Analyzes the parsed content to extract key findings, methodology, and conclusions
4. **Database**: Stores the analysis results
5. **Output**: Returns structured analysis data

## Setup

1. Configure PDF Vector credentials
2. Set up your OpenAI API key
3. Connect your preferred database (PostgreSQL, MySQL, etc.)
by Joel Cantero
# YouTube Caption Extractor (Your Channel Only)

Extracts clean transcripts from YOUR OWN CHANNEL's YouTube video captions using the YouTube Data API v3.

⚠️ **API Limitation**: Only works with videos from YOUR OWN CHANNEL. Cannot access external/public videos.

## 🎯 Use Cases

- AI summarization & sentiment analysis
- Keyword extraction from your content
- Content generation from your videos
- Batch transcript processing

## 🔄 How It Works (6 Steps)

1. 📥 **Input**: Receives `videoId` + `preferredLanguage`
2. 🔍 **API**: Lists captions from your channel
3. 🆔 **Selector**: Picks the preferred language (falls back to the first track)
4. 📥 **Download**: Gets the VTT subtitle file
5. 🧹 **Cleaning**: Removes timestamps, [Music] tags, and duplicates
6. ✅ **Output**: Clean transcript + metadata

## 🚀 How to Use

- Trigger with a JSON payload: `{"youtubeVideoId": "YOUR_VIDEO_ID", "preferredLanguage": "es"}`
- **The video ID must belong to your authenticated YouTube channel**
- Works as a sub-workflow (Execute Workflow Trigger), or replace the trigger with a Webhook/Form trigger
- Handles videos with no captions gracefully, returning a structured error response
- Output is ready for downstream AI processing or storage

## ⚠️ Setup Required

- **Change the YouTube credentials in the "List Captions" and "Download VTT" nodes**
- A video ID from your authenticated channel
- A sub-workflow or Webhook trigger
- Graceful no-captions handling

## 🔧 Requirements

- ✅ YouTube OAuth2 (`youtube.captions.read` scope)
- ✅ Updated credentials in the List Captions + Download VTT nodes
- ✅ n8n HTTP Request + Code nodes

## 💬 Need Help?

n8n Forum

Happy Automating! 🎉
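The cleaning step can be sketched as a Code node that strips the WEBVTT header, cue numbers and timing lines, bracketed sound tags like [Music], and consecutive duplicate lines. This is a simplified illustration; the template's actual regexes may differ:

```javascript
// Hypothetical VTT cleaner: strip the WEBVTT header, cue timings and
// numbers, bracketed sound tags, and consecutive duplicate caption lines.
function cleanVtt(vtt) {
  const out = [];
  for (const raw of vtt.split(/\r?\n/)) {
    const line = raw
      .replace(/<[^>]+>/g, '')     // inline tags like <c> or <00:00:01.000>
      .replace(/\[[^\]]*\]/g, '')  // [Music], [Applause], ...
      .trim();
    if (!line || line === 'WEBVTT') continue;
    if (/-->/.test(line)) continue;              // cue timing lines
    if (/^\d+$/.test(line)) continue;            // cue sequence numbers
    if (out[out.length - 1] === line) continue;  // consecutive duplicates
    out.push(line);
  }
  return out.join(' ');
}
```

Auto-generated captions in particular repeat lines across overlapping cues, which is why the duplicate check matters for a readable transcript.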
by Panth1823
# Daily n8n Workflow Backup

Automatically backs up all workflows to Google Drive daily.

## How it works

1. Triggers daily at 11 PM (or manually on demand)
2. Creates a timestamped backup folder in Google Drive
3. Fetches all workflows from your n8n instance
4. Converts each workflow to a JSON file
5. Uploads the files to the backup folder
6. Automatically deletes old backup folders to save storage

## Setup steps

1. Ensure your n8n instance has API access enabled
2. Connect your Google Drive account (OAuth2)
3. Create a Google Drive folder for backups and copy its folder ID
4. **Important**: Open the 'Cleanup Old Backups' node and paste that folder ID into the code
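Steps 2 and 4 of "How it works" can be sketched as a Code node that derives the timestamped folder name and a Drive-safe JSON file name per workflow. The naming scheme here is an assumption, not necessarily the template's exact format:

```javascript
// Hypothetical naming helpers for one backup run.
function backupFolderName(date) {
  // e.g. "n8n-backup-2025-01-31" — sortable, one folder per day
  return `n8n-backup-${date.toISOString().slice(0, 10)}`;
}

function workflowFileName(workflow) {
  // "My Flow!" (id 12) -> "12_My_Flow.json"; the id prefix keeps
  // files unique even when two workflows share a name.
  const safe = workflow.name.replace(/[^A-Za-z0-9_-]+/g, '_').replace(/^_|_$/g, '');
  return `${workflow.id}_${safe}.json`;
}
```

The cleanup node can then delete any Drive folder whose name sorts before a retention cutoff, since the date-suffixed names sort chronologically.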
by Marker.io
# Automatically create Intercom conversations with full technical context when your team receives new Marker.io issues

## 🎯 What this template does

This workflow creates a seamless bridge between Marker.io and Intercom, your customer support platform. Every issue submitted through Marker.io's widget automatically becomes a trackable conversation in Intercom, complete with technical details and visual context. Centralizing customer issues in Intercom helps your support agents continue the conversation right where they work every day.

When a bug is reported, the workflow:

1. Creates or updates the reporter as an Intercom contact
2. Opens a new conversation with the reporter, containing all the issue details
3. Adds a comprehensive internal note with technical metadata
4. Preserves all screenshots, browser info, and custom data

## ✨ Benefits

- **Zero manual entry**: All bug details transfer automatically
- **Instant visibility**: Support agents see issues immediately
- **Rich context**: Technical details preserved for developers
- **Better collaboration**: Single source of truth for bugs
- **Faster resolution**: No time wasted gathering information

## 💡 Use Cases

- **Product Teams**: Streamline bug triage without switching tools
- **Support Teams**: Get technical context for customer-reported issues
- **Development Teams**: Access browser info, console logs, and network logs directly from the support tickets

## 🔧 How it works

1. An n8n Webhook receives the Marker.io bug report data
2. Format and extract the relevant information from the payload
3. Create/update the contact in Intercom with the reporter's details
4. Start a conversation with the title and bug description
5. Add an internal note with the full technical context and Marker.io links for the support agent

The result is a perfectly organized support ticket that your team can act on immediately, with all the context they need to reproduce and resolve the issue.
## 📋 Prerequisites

- **Marker.io account** with webhook capabilities
- **Intercom account** with API access
- **Intercom Access Token** with appropriate permissions
- **Admin ID** from your Intercom workspace

## 🚀 Setup Instructions

1. Import this workflow into your n8n instance
2. Configure the Webhook:
   - Copy the test/production webhook URL after saving
   - Add it to Marker.io: Workspace Settings → Webhooks → Create webhook
   - Select "Issue Created" as the trigger event
3. Set up Intercom credentials:
   - Create an Intercom app or use existing API credentials from the Intercom Developer Hub
   - Add the credentials to both HTTP Request nodes
   - Update the `admin_id` in the "Add Internal Note" node with the ID of one of your Intercom admins
4. Test the integration:
   - Create a test issue in Marker.io
   - Verify the conversation appears in Intercom
   - Check that all data transfers correctly

## 📊 Data Captured

**Customer-facing message includes:**

- Issue title
- Description

**Internal note includes:**

- 🆔 Marker ID
- 📊 Priority level and issue type
- 📅 Due date (if set)
- 🖥️ Browser and OS details
- 🤓 Developer console & network logs
- 🌐 Website URL where the issue occurred
- 🔗 Direct link to the Marker.io issue
- 📷 Screenshot of the issue
- 📦 Any custom data fields

→ Read more about our webhook events
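The "format and extract" step can be sketched as a Code node that assembles the internal note from the webhook payload. The payload field names below are assumptions for illustration; check Marker.io's webhook event documentation for the actual schema:

```javascript
// Hypothetical builder for the Intercom internal note.
// The `issue` field names are illustrative, not Marker.io's exact schema.
function buildInternalNote(issue) {
  const lines = [
    `🆔 Marker ID: ${issue.id}`,
    `📊 Priority: ${issue.priority} | Type: ${issue.type}`,
    `🖥️ Browser/OS: ${issue.browser} on ${issue.os}`,
    `🌐 Page: ${issue.pageUrl}`,
    `🔗 Issue: ${issue.markerUrl}`,
  ];
  // The due date is optional, so insert it only when present.
  if (issue.dueDate) lines.splice(2, 0, `📅 Due: ${issue.dueDate}`);
  return lines.join('\n');
}
```

The resulting string becomes the body of the note attached to the conversation, so agents see the technical context without it ever being shown to the customer.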
by Marker.io
# Automatically create Zendesk tickets with full technical context when your team receives new Marker.io issues

## 🎯 What this template does

This workflow creates a seamless bridge between Marker.io and Zendesk, your customer support platform. Every issue submitted through Marker.io's widget automatically becomes a trackable ticket in Zendesk, complete with technical details and visual context. Centralizing customer issues in Zendesk helps your support agents continue the conversation right where they work every day.

When an issue is reported, the workflow:

1. Creates or updates the reporter as a Zendesk user
2. Opens a new ticket with all issue details
3. Adds a comprehensive internal comment with technical metadata
4. Preserves all screenshots, browser info, and custom data
5. Automatically tags tickets for easy filtering

## ✨ Benefits

- **Zero manual entry**: All bug details transfer automatically
- **Instant visibility**: Support agents see issues immediately
- **Rich context**: Technical details preserved for developers
- **Better collaboration**: Single source of truth for bugs
- **Faster resolution**: No time wasted gathering information
- **Smart organization**: Auto-tagging for efficient triage

## 💡 Use Cases

- **Product Teams**: Streamline bug triage without switching tools
- **Support Teams**: Get technical context for customer-reported issues
- **Development Teams**: Access browser info, console logs, and network logs directly from support tickets

## 🔧 How it works

1. An n8n Webhook receives the Marker.io issue data
2. Format and extract the relevant information from the payload
3. Create/update the user in Zendesk with the reporter's details
4. Create a ticket with the title and issue description
5. Add an internal comment with the screenshot, full technical context, and Marker.io links for the support agent

The result is a perfectly organized support ticket that your team can act on immediately, with all the context they need to reproduce and resolve the issue.
## 📋 Prerequisites

- **Marker.io account** with webhook capabilities
- **Zendesk account** with API access
- **Zendesk API token** with appropriate permissions

## 🚀 Setup Instructions

1. Import this workflow into your n8n instance
2. Configure the Webhook:
   - Copy the test/production webhook URL after saving
   - Add it to Marker.io: Workspace Settings → Webhooks → Create webhook
   - Select "Issue Created" as the trigger event
3. Set up Zendesk credentials:
   - Generate an API token from Zendesk Admin Center → Apps and integrations → APIs → Zendesk API
   - Add the credentials to all three HTTP Request nodes
   - Update your subdomain in the URLs (replace [REPLACE_SUBDOMAIN] with your subdomain)
4. Customize fields (optional):
   - Update the custom field ID in the "Create Ticket" node if you want to store the Marker ID
   - Modify the tags to match your workflow
   - Adjust the priority mapping if needed
5. Test the integration:
   - Create a test issue in Marker.io
   - Verify the ticket appears in Zendesk
   - Check that all data transfers correctly

## 📊 Data Captured

**Customer-facing ticket includes:**

- Issue title (as subject)
- Description (as ticket body)

**Internal comment includes:**

- 🆔 Marker ID
- 📊 Priority level and issue type
- 📅 Due date (if set)
- 🖥️ Browser and OS details
- 🤓 Developer console & network logs
- 🌐 Website URL where the issue occurred
- 🔗 Direct link to the Marker.io issue
- 📦 Any custom data fields

→ Read more about Marker.io webhook events
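The ticket-creation step maps onto Zendesk's Tickets API, which accepts a `ticket` object with a subject, a comment, a priority, and tags (internal comments are marked `public: false`). A sketch of the payload a Code node might build before the HTTP Request node — how each field is sourced from the Marker.io issue is illustrative:

```javascript
// Hypothetical builder for the Zendesk ticket body sent to
// POST /api/v2/tickets.json. The `issue` field names are illustrative.
function buildTicketPayload(issue, requesterId) {
  return {
    ticket: {
      subject: issue.title,
      comment: { body: issue.description, public: true }, // customer-facing
      requester_id: requesterId,                          // the Zendesk user created in step 3
      priority: (issue.priority || 'normal').toLowerCase(), // Zendesk expects lowercase
      tags: ['marker-io', issue.type || 'bug'],           // enables auto-filtering
    },
  };
}
```

The follow-up internal comment is then a ticket update whose comment carries `public: false`, which is what keeps the technical metadata agent-only.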