by Václav Čikl
## Overview

This workflow automates the entire process of creating professional subtitle (.SRT) and synced lyrics (.LRC) files from audio recordings. Upload your vocal track, let Whisper AI transcribe it with precise timestamps, and GPT-5-nano segments it into natural, singable lyric lines. With an optional quality-control step, you can manually refine the output while maintaining perfect timestamp alignment.

## Key Features

- **Whisper AI Transcription**: Word-level timestamps with multi-language support via ISO codes
- **Intelligent Segmentation**: GPT-5-nano formats transcriptions into natural lyric lines (2–8 words per line)
- **Quality Control Option**: Download, edit, and re-upload corrections with smart timestamp matching
- **Advanced Alignment**: Levenshtein distance algorithm preserves timestamps during manual edits
- **Dual Format Export**: Generates both .SRT (video subtitles) and .LRC (synced lyrics) files
- **No Storage Needed**: Files are generated in memory for instant download
- **Multi-Language**: Supports various languages through the Whisper API

## Use Cases

- Generate synced lyrics for music video releases on YouTube
- Create .LRC files for Musixmatch, Apple Music, and Spotify
- Prepare professional subtitles for social media content
- Batch-process subtitle files for catalog releases
- Maintain consistent lyric formatting across artists
- Streamline content delivery for streaming platforms
- Speed up video editing workflows

## Perfect For

Musicians & artists, record labels, and content creators.

## What You'll Need

### Required Setup

- **OpenAI API Key** for Whisper transcription and GPT-5-nano segmentation

### Recommended

- **Input Format**: MP3 audio files (max 25 MB)
- **Content**: Clean vocal tracks work best (isolated vocals are recommended, but full tracks still give good results)
- **Languages**: Any language supported by Whisper (specify via ISO code)

## How It Works

### Automatic Mode (No Quality Check)

1. Upload your MP3 vocal track to the workflow
2. **Transcription**: Whisper AI processes the audio with word-level timestamps
3. **Segmentation**:
GPT-5-nano formats the text into natural lyric lines
4. **Generation**: The workflow creates the .SRT and .LRC files
5. Download your ready-to-use subtitle files

### Manual Quality Control Mode

1. Upload your MP3 vocal track and enable the quality check
2. **Transcription**: Whisper AI processes the audio with timestamps
3. **Initial Segmentation**: GPT-5-nano creates a first draft
4. Download the .TXT file for review
5. Edit the lyrics in any text editor (keep the line structure intact)
6. Re-upload the corrected .TXT file
7. **Smart Matching**: An advanced diff algorithm aligns your changes with the original timestamps
8. Download the final .SRT and .LRC files with perfect timing

## Technical Details

- **Transcription API**: OpenAI Whisper (`/v1/audio/transcriptions`)
- **Segmentation Model**: GPT-5-nano with a custom lyric-focused prompt
- **System Prompt**: "You are helping with preparing song lyrics for musicians. Take the following transcription and split it into lyric-like lines. Keep lines short (2–8 words), natural for singing/rap phrasing, and do not change the wording."
- **Timestamp Matching**: Levenshtein distance + alignment algorithm
- **File Size Limit**: 25 MB (n8n platform default)
- **Processing**: All in-memory, no disk storage
- **Cost**: Based on Whisper API usage (varies with audio length)

## Output Formats

### .SRT (SubRip Subtitle)

Standard format for:

- YouTube video subtitles
- Video editing software (Premiere, DaVinci Resolve, etc.)
- Media players (VLC, etc.)
### .LRC (Lyric File)

Synced lyrics format for:

- Musixmatch
- Apple Music
- Spotify
- Music streaming services
- Audio players with a lyrics display

## Pro Tips

💡 **For Best Results:**

- Use isolated vocal tracks when possible (remove instrumentals)
- Ensure clear recordings with minimal background noise
- For quality-check edits, only modify the text content; don't change line breaks
- Test with shorter tracks first to optimize your workflow

⚙️ **Customization Options:**

- Adjust the GPT segmentation style by modifying the system prompt
- Add language detection or force specific languages in the Whisper settings
- Customize output file naming conventions in the final nodes
- Extend the workflow with additional format exports if needed

## Workflow Components

1. **Audio Input**: Upload interface for MP3 files
2. **Whisper Transcribe**: OpenAI API call with timestamp extraction
3. **Post-Processing**: GPT-5-nano segmentation into lyric format
4. **Routing Quality Check**: Decision point for manual review
5. **Timestamp Matching**: Diff and alignment for corrected text
6. **Subtitles Preparation**: JSON formatting for both output types
7. **File Generation**: Convert to .SRT and .LRC formats
8. **Download Nodes**: Export the final files

## Template Author

Questions or need help with setup?

📧 Email: xciklv@gmail.com
💼 LinkedIn: https://www.linkedin.com/in/vaclavcikl/
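To make the two output formats concrete, here is a minimal JavaScript sketch that turns timestamped lyric segments into .SRT and .LRC text. The `{ start, end, text }` segment shape is an assumed example; the workflow's internal JSON may differ.

```javascript
// Convert timestamped lyric segments into .SRT and .LRC text.
// The { start, end, text } segment shape is an assumed example format.

function toSrtTime(sec) {
  const ms = Math.round(sec * 1000);
  const h = String(Math.floor(ms / 3600000)).padStart(2, "0");
  const m = String(Math.floor((ms % 3600000) / 60000)).padStart(2, "0");
  const s = String(Math.floor((ms % 60000) / 1000)).padStart(2, "0");
  const f = String(ms % 1000).padStart(3, "0");
  return `${h}:${m}:${s},${f}`; // SRT uses a comma before the milliseconds
}

function toLrcTime(sec) {
  const m = String(Math.floor(sec / 60)).padStart(2, "0");
  const s = (sec % 60).toFixed(2).padStart(5, "0");
  return `[${m}:${s}]`; // LRC uses [mm:ss.xx]
}

function toSrt(segments) {
  return segments
    .map((seg, i) => `${i + 1}\n${toSrtTime(seg.start)} --> ${toSrtTime(seg.end)}\n${seg.text}`)
    .join("\n\n") + "\n";
}

function toLrc(segments) {
  return segments.map((seg) => `${toLrcTime(seg.start)}${seg.text}`).join("\n") + "\n";
}

const segments = [
  { start: 12.5, end: 15.2, text: "First line of the verse" },
  { start: 15.2, end: 18.0, text: "Second line follows here" },
];

console.log(toSrt(segments));
console.log(toLrc(segments));
```

Note the structural difference: .SRT carries an index plus a start and end time per block, while .LRC attaches a single start time to each line, which is why the same segment list can feed both generators.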
by Le Nguyen
This template implements a recursive web crawler inside n8n. Starting from a given URL, it crawls linked pages up to a maximum depth (default: 3), extracts text and links, and returns the collected content via webhook.

🚀 How It Works

1) **Webhook Trigger**: Accepts a JSON body with a url field. Example payload:

```json
{ "url": "https://example.com" }
```

2) **Initialization**: Sets the crawl parameters (url, domain, maxDepth = 3, and depth = 0) and initializes global static data (pending, visited, queued, pages).

3) **Recursive Crawling**: Fetches each page (HTTP Request), extracts body text and links (HTML node), then cleans and deduplicates the links. It filters out:
   - External domains (only same-site links are followed)
   - Anchors (#) and mailto/tel/javascript links
   - Non-HTML files (.pdf, .docx, .xlsx, .pptx)

4) **Depth Control & Queue**: Tracks visited URLs, stops at maxDepth to prevent infinite loops, and uses SplitInBatches to loop over the queue.

5) **Data Collection**: Saves each crawled page (url, depth, content) into pages[]. When pending = 0, it combines the results.

6) **Output**: Responds via the Webhook node with combinedContent (all pages concatenated) and pages[] (an array of individual results). Large results are chunked when they exceed ~12,000 characters.

🛠️ Setup Instructions

1) **Import Template**: Load it from n8n Community Templates.

2) **Configure Webhook**: Open the Webhook node and copy the Test URL (development) or Production URL (after deploy). You'll POST crawl requests to this endpoint.

3) **Run a Test**: Send a POST with JSON:

```bash
curl -X POST https://<your-n8n>/webhook/<id> \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```

4) **View Response**: The crawler returns a JSON object containing combinedContent and pages[].

⚙️ Configuration

- **maxDepth**: Default: 3. Adjust it in the Init Crawl Params (Set) node.
- **Timeouts**: The HTTP Request node timeout is 5 seconds per request; increase it if needed.
- **Filtering Rules**:
  - Only same-domain links are followed (apex and www are treated as the same site)
  - Skips anchors, mailto:, tel:, and javascript: links
  - Skips document links (.pdf, .docx, .xlsx, .pptx)
  - You can tweak the regex and logic in the Queue & Dedup Links (Code) node

📌 Limitations

- No JavaScript rendering (static HTML only)
- No authentication/cookies/session handling
- Large sites can be slow or hit timeouts; chunking mitigates response size

✅ Example Use Cases

- Extract text across your site for AI ingestion / embeddings
- SEO/content audits and internal link checks
- Build a lightweight page corpus for downstream processing in n8n

⏱️ Estimated Setup Time

~10 minutes (import → set webhook → test request)
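For reference, the same-site and link-type filtering described above can be sketched in plain JavaScript. This is illustrative only; the actual Queue & Dedup Links Code node may differ in names and details.

```javascript
// Sketch of the crawler's link-filtering rules (illustrative, not the
// exact code from the "Queue & Dedup Links" node).

const SKIP_SCHEMES = /^(mailto:|tel:|javascript:|#)/i;
const SKIP_EXTENSIONS = /\.(pdf|docx?|xlsx?|pptx?)(\?|#|$)/i;

function sameSite(linkHost, baseHost) {
  // Treat apex and www as the same site.
  const strip = (h) => h.replace(/^www\./i, "").toLowerCase();
  return strip(linkHost) === strip(baseHost);
}

function filterLinks(links, baseUrl, visited) {
  const base = new URL(baseUrl);
  const out = [];
  for (const raw of links) {
    if (SKIP_SCHEMES.test(raw)) continue; // bare anchors, mailto:, tel:, javascript:
    let url;
    try { url = new URL(raw, base); } catch { continue; } // resolve relative links
    if (url.protocol !== "http:" && url.protocol !== "https:") continue;
    if (!sameSite(url.hostname, base.hostname)) continue; // external domain
    if (SKIP_EXTENSIONS.test(url.pathname)) continue;     // non-HTML documents
    url.hash = ""; // drop the fragment so /page and /page#top dedupe together
    const key = url.toString();
    if (visited.has(key)) continue;
    visited.add(key);
    out.push(key);
  }
  return out;
}
```

The shared `visited` set doubles as the dedup store across crawl iterations, which is the same role the workflow's global static data plays.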
by Vince V
This workflow automatically generates and delivers professional invoice PDFs whenever a Stripe checkout session completes. It fetches the line items from Stripe, formats them into a clean invoice with your company details, generates a branded PDF via TemplateFox, emails it to the customer, and saves a copy to Google Drive.

## Problem Solved

Without this automation, invoicing after a Stripe payment requires:

- Monitoring your Stripe dashboard for completed checkouts
- Manually creating an invoice with the correct line items and totals
- Exporting it as a PDF and emailing it to the customer
- Saving the invoice to your file storage for bookkeeping
- Repeating this for every single payment

This workflow handles all of that automatically for every Stripe checkout, including proper invoice numbering, due dates, and tax calculations.

## Who Can Benefit

- **SaaS companies** billing customers through Stripe Checkout
- **E-commerce stores** sending invoices after purchase
- **Service providers** using Stripe for client payments
- **Freelancers** who want automatic invoicing after payment
- **Accountants** who need invoice PDFs archived in Google Drive

## Prerequisites

- TemplateFox account with an API key (free tier available)
- Stripe account with API access
- Gmail account with OAuth2 configured
- Google Drive account with OAuth2 configured
- The TemplateFox community node, installed from Settings → Community Nodes

## Setting Up Your Template

You need a TemplateFox invoice template for this workflow. You can:

- **Start from an example**: Browse invoice templates, pick one you like, and customize it in the visual editor to match your branding
- **Create from scratch**: Design your own invoice template in the TemplateFox editor

Once your template is ready, select it from the dropdown in the TemplateFox node; there's no need to copy template IDs manually.

## Workflow Details

### Step 1: Stripe Trigger

Fires on every completed checkout session (checkout.session.completed). This captures successful payments with full customer and product details.
### Step 2: Get Line Items

An HTTP Request node calls the Stripe API to fetch the line items for the checkout session (product names, quantities, amounts). Stripe doesn't include line items in the webhook payload, so this separate call is required.

### Step 3: Format Invoice Data

A Code node combines the Stripe session data and line items into a clean invoice structure: company details, client info (from the Stripe customer), line items with prices, subtotal, tax, total, invoice number (auto-generated from the date + session ID), and due date (Net 30).

### Step 4: TemplateFox — Generate Invoice

Select your invoice template from the dropdown; the node automatically loads your template's fields. Map each field to the matching output from the Code node (e.g. client_company → {{ $json.client_company }}). TemplateFox generates a professional invoice PDF using your custom template.

### Step 5a: Email Invoice

Sends the invoice PDF link to the customer via Gmail with the invoice number, amount, and due date.

### Step 5b: Save to Google Drive

Downloads the PDF and uploads it to a Google Drive folder for bookkeeping. Runs in parallel with the email step.

## Customization Guidance

- **Company details:** Set your company name, address, logo, bank details, and VAT number directly in the template editor; they never change between invoices, so there's no reason to pass them from n8n.
- **Invoice numbering:** Modify the invoiceNumber format in the Code node (default: INV-YYYY-MMDD-XXXXXX).
- **Payment terms:** Change the due date calculation (default: Net 30).
- **Drive folder:** Set your Google Drive folder ID in the "Save to Google Drive" node.
- **Template:** Use any invoice template from your TemplateFox account; select it from the dropdown.
- **Email body:** Customize the invoice email text in the "Email Invoice" node.

Note: This template uses the TemplateFox community node. Install it from Settings → Community Nodes.
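As a rough sketch of the Format Invoice Data step, here is how a Code node might assemble the invoice structure. The field names and exact number format are assumptions based on the description above, not the template's actual code.

```javascript
// Sketch of a "Format Invoice Data" Code node: build an invoice number
// from the date plus the Stripe session ID, and a Net 30 due date.
// Field names are illustrative; the real node may differ.

function formatInvoice(session, lineItems, now = new Date()) {
  const y = now.getUTCFullYear();
  const md =
    String(now.getUTCMonth() + 1).padStart(2, "0") +
    String(now.getUTCDate()).padStart(2, "0");
  const suffix = session.id.slice(-6).toUpperCase(); // last 6 chars of the cs_... id
  const due = new Date(now.getTime() + 30 * 24 * 60 * 60 * 1000); // Net 30

  const items = lineItems.map((li) => ({
    description: li.description,
    quantity: li.quantity,
    amount: li.amount_total / 100, // Stripe amounts are in the smallest currency unit
  }));
  const subtotal = items.reduce((s, it) => s + it.amount, 0);

  return {
    invoiceNumber: `INV-${y}-${md}-${suffix}`,
    invoiceDate: now.toISOString().slice(0, 10),
    dueDate: due.toISOString().slice(0, 10),
    client_company: session.customer_details?.name ?? "",
    items,
    subtotal,
  };
}

const invoice = formatInvoice(
  { id: "cs_test_a1B2c3", customer_details: { name: "Acme GmbH" } },
  [{ description: "Pro plan", quantity: 1, amount_total: 4900 }],
  new Date("2024-03-05T00:00:00Z")
);
```

Dividing `amount_total` by 100 is correct for two-decimal currencies like EUR or USD; zero-decimal currencies (e.g. JPY) would need different handling.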
by Firecrawl
## What this does

Uses Firecrawl to scrape any company website and extract structured business signals from it. The enriched profile is automatically saved to Supabase. A self-hosted, free alternative to paid enrichment APIs like Apollo or Clay, powered by Firecrawl.

## How it works

1. A Webhook receives a POST request with a url field (bare domain or full URL)
2. The Verify URL node validates and normalizes the domain
3. Firecrawl scrapes the target website and searches for additional company data
4. An AI Agent (OpenRouter) extracts structured business signals from the scraped content
5. A Structured Output Parser formats the result into a clean JSON profile
6. Supabase checks for duplicates before inserting, then saves the enriched profile
7. Respond to Webhook returns the enriched result (or a 422 error if the URL was invalid)

## Business signals extracted

Company name, industry, pricing model, free trial availability, employee size signal, funding stage, tech stack and integrations detected, target customer profile, trust signals (certifications, reviews, customer count), hiring status, and open roles count.
## Requirements

- Firecrawl API key
- OpenRouter API key (or swap in any OpenAI-compatible model)
- Supabase project (setup SQL provided below)

## Setup

1. Create a Supabase project and run the following SQL in the SQL editor:

```sql
CREATE TABLE lead_enrichment (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now(),
  domain TEXT NOT NULL UNIQUE,
  company_name TEXT,
  industry TEXT,
  pricing_model TEXT,
  has_free_trial BOOLEAN,
  employee_signal TEXT,
  funding_stage TEXT,
  tech_stack TEXT[],
  integrations TEXT[],
  target_customer TEXT,
  trust_signals TEXT[],
  hiring BOOLEAN,
  open_roles_count INT,
  raw_scraped_text TEXT,
  enrichment_source TEXT DEFAULT 'firecrawl'
);

CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_updated_at
BEFORE UPDATE ON lead_enrichment
FOR EACH ROW EXECUTE FUNCTION update_updated_at();
```

2. Add your Firecrawl API key as a credential in n8n
3. Add your OpenRouter API key as a credential (or swap in any OpenAI-compatible provider)
4. Add your Supabase credentials (project URL + service role key)
5. Activate the workflow

## How to use

Send a POST request to the webhook URL:

```bash
curl -X POST https://your-n8n-instance/webhook/your-id \
  -H "Content-Type: application/json" \
  -d '{"url": "firecrawl.dev"}'
```
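The Verify URL step might normalize input along these lines. This is a sketch under assumed rules; the actual node's validation logic may differ.

```javascript
// Sketch of a "Verify URL" step: accept a bare domain or a full URL,
// normalize it to a domain, and signal invalid input (which the workflow
// would turn into a 422 response). Illustrative, not the exact node code.

function normalizeDomain(input) {
  if (typeof input !== "string" || input.trim() === "") return null;
  let raw = input.trim();
  if (!/^https?:\/\//i.test(raw)) raw = "https://" + raw; // bare domain
  let host;
  try { host = new URL(raw).hostname.toLowerCase(); } catch { return null; }
  host = host.replace(/^www\./, "");
  // Require at least one dot, e.g. "firecrawl.dev"
  return /^[a-z0-9-]+(\.[a-z0-9-]+)+$/.test(host) ? host : null;
}
```

Returning `null` (rather than throwing) keeps the decision of how to respond, such as the 422 error mentioned above, in the calling node.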
by Pixril
## Who is this for?

This workflow is designed for social media managers, marketing agencies, and business owners who want to automate their Facebook and Instagram posting without losing quality control. It is perfect if you manage content in Google Sheets and want AI to write your captions, but you still need a human to review the final post before it goes live to your audience.

## What this workflow does

This workflow acts as a complete social media auto-poster. It reads your product inventory from Google Sheets, finds items that haven't been promoted recently, and uses OpenAI to write a professional, engaging caption. Before publishing, the workflow pauses and sends you a simple web form. Once you click "Approve," it automatically publishes the post to Facebook and Instagram, then logs the successful post back in your Google Sheet.

## Key Features

- **Smart Google Sheets Tracking:** Uses your spreadsheet as an inventory tracker to ensure you never over-promote the same product twice.
- **AI Copywriting:** Automatically drafts platform-optimized, emoji-free captions based on your product data.
- **Approval Gate:** Pauses the automation to let you manually review and approve the AI-generated content via an n8n web form.
- **Multi-Platform Auto-Posting:** Connects directly to the Facebook Graph API to publish to your Facebook Page and Instagram Business Account.

## How it works

1. **Trigger**: Open the n8n form and choose to promote a specific product or a general agency pitch.
2. **Selection**: The workflow scans your Google Sheet and picks the item with the lowest "Times Posted" count.
3. **Generation**: OpenAI writes a tailored caption for the selected item.
4. **Review**: You receive an interactive prompt to review the text and image URL.
5. **Publishing**: Upon approval, it posts instantly to Meta (Facebook/Instagram) and increments the post count in your Google Sheet.

## Set up steps

Estimated time: 10 minutes

1. **Google Sheets**: Connect your Google credential to the Sheets nodes and add your Spreadsheet ID.
OpenAI Key: Add your API key to the "AI Copywriter" node. Meta API: Connect your Facebook Graph API credentials to the FB/IG nodes and ensure your Page ID is correct. Run: Click "Test URL" on the Pixril Dispatcher form to trigger your first post! About the Creator Built by Pixril. We specialize in building advanced, production-ready AI workflows and automation templates for n8n. Find more professional workflows in our shop: https://pixril.etsy.com
by Mohamed Salama
Let AI agents fetch data from and communicate with your Bubble app automatically. This workflow connects directly to your Bubble Data API. It is designed for teams building AI tools or copilots that need seamless access to Bubble backend data via natural language queries.

## How it works

1. Triggered via a webhook from an AI agent using the MCP (Model Context Protocol).
2. The agent selects the appropriate data tool (e.g., projects, users, bookings) based on user intent.
3. The workflow queries your Bubble database and returns the result.

Ideal for integrating with ChatGPT, n8n AI Agents, assistants, or autonomous workflows that need real-time access to app data.

## Set up steps

1. Enable access to your Bubble data or backend APIs (as needed).
2. Create a Bubble admin token.
3. Add your Bubble node(s) to your n8n workflow.
4. Add your Bubble admin token.
5. Configure your Bubble node(s).
6. Copy the generated webhook URL from the MCP Server Trigger node and register it with your AI tool (e.g., LangChain tool loader).
7. (Optional) Adjust the filters in the "Get an Object Details" node to match your dataset needs.

Once connected, your AI agents can automatically retrieve context-aware data from your Bubble app, with no manual lookups required.
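As an illustration of what such a data-tool call ends up doing under the hood, here is a sketch of a Bubble Data API query. The app URL, data type, and constraint fields are placeholders; consult Bubble's Data API documentation for your app's exact shape.

```javascript
// Sketch of a Bubble Data API query URL, as an MCP data tool might build it.
// "myapp", "bookings", and the constraint fields are placeholder examples.

function bubbleQueryUrl(appUrl, dataType, constraints) {
  const params = new URLSearchParams({
    constraints: JSON.stringify(constraints),
    limit: "10",
  });
  return `${appUrl}/api/1.1/obj/${dataType}?${params.toString()}`;
}

const url = bubbleQueryUrl("https://myapp.bubbleapps.io", "bookings", [
  { key: "status", constraint_type: "equals", value: "confirmed" },
]);

// The actual request would carry the Bubble admin token, e.g.:
// fetch(url, { headers: { Authorization: "Bearer <BUBBLE_ADMIN_TOKEN>" } })
```

Bubble expects the `constraints` query parameter as a JSON-encoded array, which is why the sketch serializes it with `JSON.stringify` before URL-encoding.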
by David Ashby
Complete MCP server exposing 14 doqs.dev | PDF filling API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. **Credentials**: Add doqs.dev | PDF filling API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the doqs.dev | PDF filling API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.doqs.dev/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (14 total)

🔧 Designer (7 endpoints)

• GET /designer/templates/: List Templates
• POST /designer/templates/: Create Template
• POST /designer/templates/preview: Preview
• DELETE /designer/templates/{id}: Delete
• GET /designer/templates/{id}: Get Template
• PUT /designer/templates/{id}: Update Template
• POST /designer/templates/{id}/generate: Generate PDF

🔧 Templates (7 endpoints)

• GET /templates: List
• POST /templates: Create
• DELETE /templates/{id}: Delete
• GET /templates/{id}: Get Template
• PUT /templates/{id}: Update
• GET /templates/{id}/file: Get File
• POST /templates/{id}/fill: Fill

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:

• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native doqs.dev | PDF filling API responses with the full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:

• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Yves Tkaczyk
## Use cases

- Monitor a Google Drive folder, parsing PDF, DOCX, and image files into a destination folder, ready for further processing (e.g. RAG ingestion, translation, etc.)
- Keep a processing log in Google Sheets and send Slack notifications.

## How it works

1. **Trigger**: Watch a Google Drive folder for new and updated files.
2. Create a uniquely named destination folder, copying the input file into it.
3. Parse the file using Mistral Document, extracting content and handling non-OCRable images separately.
4. Save the data returned by Mistral Document into the destination Google Drive folder (raw JSON file, Markdown files, and images) for further processing.

## How to use

1. **Google Drive and Google Sheets nodes**: Create Google credentials with access to Google Drive and Google Sheets. Read more about Google Credentials. Update all Google Drive and Google Sheets nodes (14 nodes total) to use the credentials.
2. **Mistral node**: Create Mistral Cloud API credentials. Read more about Mistral Cloud Credentials. Update the OCR Document node to use the Mistral Cloud credentials.
3. **Slack nodes**: Create Slack OAuth2 credentials. Read more about Slack OAuth2 credentials. Update the two Slack nodes, Send Success Message and Send Error Message: set the credentials and select the channel where you want to send the notifications (channels can be different for success and errors).
4. Create a Google Sheets spreadsheet following the steps in Google Sheets Configuration. Ensure the spreadsheet can be accessed as Editor by the account used by the Google credentials above.
5. Create a directory for input files and a directory for output folders/files. Ensure the directories can be accessed by the account used by the Google credentials.
6. Update the File Created, File Updated, and Workflow Configuration nodes following the steps in the green notes.

## Requirements

- Google account with Google API access
- Mistral Cloud account with access to a Mistral API key
- Slack account with access to a Slack client ID and client secret
Basic n8n knowledge: understanding of triggers, expressions, and credential management Who’s it for Anyone building a data pipeline ingesting files to be OCRed for further processing. 🔒 Security All credentials are stored as n8n credentials. The only information stored in this workflow that could be considered sensitive are the Google Drive Directory and Sheet IDs. These directories and the spreadsheet should be secured according to your needs. Need Help? Reach out on LinkedIn or Ask in the Forum!
by Corentin Ribeyre
This template can be used for real-time listening and processing of search results with Icypeas. Be sure to have an active account to use this template.

## How it works

This workflow can be divided into two steps:

1. A Webhook node to link your Icypeas account with your n8n workflow.
2. A Set node to retrieve the relevant information.

## Set up steps

You will need a working Icypeas account to run the workflow, and you will have to paste the production URL provided by the n8n Webhook node.
by Pixril
## Overview

This workflow deploys a fully autonomous "Viral News Agency" inside your n8n instance. Unlike simple auto-posters, this is a comprehensive content production pipeline. It acts as a 24/7 news monitor that scrapes viral stories, rewrites them into educational scripts using GPT-4o, designs professional 10-slide carousels, and publishes them directly to Instagram Business, completely on autopilot.

## Key Features

- **Dual-Engine Architecture:** The unique "Hybrid Core" lets you choose between **Free (Gotenberg/Docker)** or **Paid (APITemplate)** image generation. Switch engines instantly via the Setup Form.
- **Smart RSS Scraping:** Cleans incoming feeds and extracts high-quality "OG" (Open Graph) images to use as dynamic backgrounds.
- **Viral Content Writer:** Uses a specialized AI Agent prompt to write "Hot Takes" and educational hooks, ensuring content is engaging, not just a summary.
- **Auto-Publisher:** Handles the complex Meta API flow (Container > Media Bundle > Publish) to upload multi-slide carousels automatically.

## How it works

1. **Monitor**: The News Source node watches your chosen RSS feeds (Tech, Sports, Politics, etc.) for breaking stories.
2. **Analyze**: The AI Analyst (GPT-4o) reads the article, extracts the viral angle, and writes a full 10-slide script with captions and hashtags.
3. **Design**: The workflow routes data to your chosen engine. It loops through the script 10 times to generate individual slides (Title, Content, Quotes).
4. **Publish**: The agent uploads the images to Facebook's servers, bundles them into a Carousel Container, and publishes it live to your Instagram feed.

## Set up steps

Estimated time: 10 minutes

1. **Credentials**: Add your keys for OpenAI (Intelligence), Google Drive (Storage), and Facebook Graph API (Publishing).
2. **Instagram ID**: Open the 3 Facebook nodes ("Create Container", "Carousel Bundle", "Publish Carousel") and replace the placeholder ID with your Instagram Business User ID.
Image Engine: Option A (Free): Ensure you have a local Gotenberg instance running via Docker (docker run --rm -p 3000:3000 gotenberg/gotenberg:8). Option B (Paid): In the "Generate Image" node, add your APITemplate API Key and Template ID. Run: Use the "SETUP FORM" node to enter your RSS URL and Brand Name, then toggle to "Active"! About the Creator Built by Pixril. We specialize in building advanced, production-ready AI agents for n8n. Visit our website: https://www.pixril.com/ Find more professional workflows in our shop: https://pixril.etsy.com
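For reference, the Container > Media Bundle > Publish flow handled by the three Facebook nodes corresponds roughly to these Graph API request shapes. This is a sketch based on Meta's documented Instagram content-publishing endpoints; the API version and all IDs are placeholders.

```javascript
// Sketch of the Instagram carousel publishing flow as three request shapes:
// 1) one container per slide, 2) a carousel container bundling the slides,
// 3) a publish call. Version and IDs are placeholders.

const GRAPH = "https://graph.facebook.com/v19.0";

function slideContainerRequest(igUserId, imageUrl) {
  // POST /{ig-user-id}/media with is_carousel_item=true → returns a container ID
  return {
    url: `${GRAPH}/${igUserId}/media`,
    params: { image_url: imageUrl, is_carousel_item: "true" },
  };
}

function carouselBundleRequest(igUserId, childIds, caption) {
  // POST /{ig-user-id}/media with media_type=CAROUSEL and the child container IDs
  return {
    url: `${GRAPH}/${igUserId}/media`,
    params: { media_type: "CAROUSEL", children: childIds.join(","), caption },
  };
}

function publishRequest(igUserId, creationId) {
  // POST /{ig-user-id}/media_publish with the carousel container ID
  return {
    url: `${GRAPH}/${igUserId}/media_publish`,
    params: { creation_id: creationId },
  };
}
```

Each slide request must complete (and return its container ID) before the bundle request can reference it, which is why the workflow loops over the slides before bundling and publishing.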
by Nguyen Thieu Toan
## Auto-reply Instagram DM with AI Chatbot and Conversation History using Google Gemini and n8n Data Table

This workflow turns your Instagram Business or Creator account into an AI-powered customer support chatbot using Google Gemini and the n8n Data Table for persistent conversation history. Every incoming Direct Message is automatically received, processed, and replied to, with long AI responses intelligently split and delivered in sequence. If you need to automate Instagram DM responses without managing complex infrastructure, this workflow is the right starting point.

## How it works

- **Instagram Webhook receives the DM:** Meta sends the event to n8n. The workflow automatically handles both webhook verification and incoming message events. Text-only messages are filtered; bot reply-backs from the page itself are blocked.
- **Set Context extracts all runtime config:** The sender ID, page ID, access token, and raw message text are extracted in a single node, with no hardcoded values elsewhere.
- **Message is stored and history is loaded:** The new message is saved to the **n8n Data Table** as unprocessed. All pending rows for the user are merged into one prompt (basic batching). The last 15 processed rows are loaded and formatted into session history blocks for context.
- **Gemini AI Agent generates the reply:** The merged prompt and full conversation history are passed to **Google Gemini**. The AI responds in context, following the persona and instructions defined in the system prompt.
- **Reply is formatted and delivered:** The response is normalized (markdown stripped, unsupported syntax removed) and split into chunks of up to 2000 characters. Each chunk is sent sequentially via the **Instagram Graph API** with a 1-second delay between messages.
- **Data Table is updated and cleaned:** All pending rows are marked as processed. The AI reply is saved as the page response. Old rows beyond the 15-message window are automatically deleted to keep the table lean.
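The 2000-character reply splitting can be sketched like this (illustrative; the workflow's own Code node may choose chunk boundaries differently):

```javascript
// Sketch of the reply-splitting step: break a long AI response into
// chunks of at most 2000 characters, preferring to cut at a line break
// or word boundary. Illustrative, not the exact node code.

function splitReply(text, maxLen = 2000) {
  const chunks = [];
  let rest = text.trim();
  while (rest.length > maxLen) {
    const window = rest.slice(0, maxLen);
    // Prefer the last newline, then the last space, inside the window.
    let cut = window.lastIndexOf("\n");
    if (cut < 1) cut = window.lastIndexOf(" ");
    if (cut < 1) cut = maxLen; // no boundary found: hard cut
    chunks.push(rest.slice(0, cut).trim());
    rest = rest.slice(cut).trim();
  }
  if (rest) chunks.push(rest);
  return chunks;
}
```

The workflow then sends each chunk in order with a short delay between them, so the pieces arrive as a readable sequence rather than one truncated message.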
## How to use

1. Create a Meta App and add the Instagram product.
2. Go to Instagram > API Setup with credentials → log in to your IG Business/Creator account → copy the long-lived Access Token → paste it into the Set Context node (ig_access_token field).
3. Activate the workflow in production mode first, then go to Meta App > Instagram > Webhooks: paste the n8n production webhook URL + a Verify Token → click Verify.
4. Connect a Google Gemini (googlePalmApi) credential in the AI Agent and LLM nodes.
5. Create the n8n Data Table.
6. Edit the system prompt inside Process Merged Message to set your AI persona, brand tone, and knowledge base.
7. Activate and start receiving automated replies on Instagram DM.

## Requirements

- **n8n Version:** Built and tested on **n8n 2.9.4+**. (It is highly recommended to update to the latest n8n version to avoid compatibility issues.)
- **Instagram Business or Creator account** connected to a Meta App with Messaging permissions.
- **Google Gemini** API key (googlePalmApi credential).
- **n8n Data Table** named insert_message with the column schema described above.
- A publicly accessible n8n instance (self-hosted or cloud) for Meta to reach the webhook.

## Customizing this workflow

- **Change the AI persona:** Edit only the system prompt inside Process Merged Message; no other node needs changing.
- **Switch the AI model:** Swap the Google Gemini Chat Model sub-node for any other supported LLM (OpenAI, Anthropic, etc.).
- **Add smart message batching:** Integrate the **Smart message batching workflow** to wait for the user to finish typing before responding; this prevents duplicate or out-of-order replies.
- **Add human takeover:** Integrate the **Smart human takeover workflow** to automatically pause the bot when an admin replies manually, then resume when done.
- **Use on Facebook Messenger too:** The Smart Batching and Human Takeover workflows above are 100% compatible with both Facebook and Instagram, built to be cross-platform from the ground up.
About the Author Created by: Nguyễn Thiệu Toàn (Jay Nguyen) Email: me@nguyenthieutoan.com Website: nguyenthieutoan.com Company: GenStaff (genstaff.net) Socials (Facebook / X / LinkedIn): @nguyenthieutoan More templates: n8n.io/creators/nguyenthieutoan
by Influencers Club
## How it works

Find lookalikes of other creators and add them to your CRM for influencer outreach and partnerships. A step-by-step workflow to discover creators similar to your best performers, with multi-social (Instagram, TikTok, YouTube, Twitter, OnlyFans, Twitch, and more) profiles, analytics, and metrics using the influencers.club API, and add the contact records and data to HubSpot.

## Set up

- HubSpot (can be swapped for any CRM like Salesforce or a Google Sheet)
- influencers.club API key