by Muhammad Farooq Iqbal
This n8n template demonstrates how to create authentic-looking User Generated Content (UGC) advertisements using AI image generation, voice synthesis, and lip-sync technology. The workflow transforms product images into realistic customer testimonial videos that mimic genuine user reviews and social media content.

Use cases are many: generate authentic UGC-style ads for social media campaigns, create customer testimonial videos without hiring influencers, produce localized UGC content for different markets, automate TikTok/Instagram-style product reviews, or scale UGC ad production for e-commerce brands!

## Good to know
- The workflow creates UGC-style content that appears genuine and authentic
- Uses multiple AI services: OpenAI GPT-4o for analysis, ElevenLabs for voice synthesis, and WaveSpeed AI for image generation and lip-sync
- Voice synthesis costs vary by ElevenLabs plan (typically $0.18–$0.30 per 1K characters)
- WaveSpeed AI pricing: ~$0.039 per image generation, with additional costs for lip-sync processing
- Processing time: ~3–5 minutes per complete UGC video
- Optimized for Malaysian-English content but easily adaptable for global markets

## How it works
1. **Product Input**: The Telegram bot receives product images to create UGC ads for
2. **AI Analysis**: GPT-4o analyzes the product to understand brand, colors, and target demographics
3. **UGC Content Creation**: AI generates authentic-sounding testimonial scripts and detailed prompts for realistic customer scenarios
4. **Character Generation**: WaveSpeed AI creates believable customer avatars that look like real users reviewing products
5. **Voice Synthesis**: ElevenLabs generates natural, conversational audio using gender-appropriate voice models
6. **UGC Video Production**: WaveSpeed AI combines generated characters with audio to create TikTok/Instagram-style review videos
7. **Content Delivery**: Final UGC videos are delivered via Telegram, ready for social media posting

The workflow produces UGC-style content that maintains authenticity while showcasing products in realistic, relatable scenarios that resonate with target audiences.

## How to use
1. **Setup Credentials**: Configure OpenAI API, ElevenLabs API, WaveSpeed AI API, Cloudinary, and Telegram Bot credentials
2. **Deploy Workflow**: Import the template and activate the workflow
3. **Send Product Images**: Use the Telegram bot to send product images you want to create UGC ads for
4. **Automatic UGC Generation**: The workflow automatically creates authentic-looking customer testimonial videos
5. **Receive UGC Content**: Get both testimonial images and final UGC videos ready for social media campaigns

Pro tip: the workflow automatically detects product demographics and creates appropriate customer personas. For best UGC results, use clear product images that show the item in use.

## Requirements
- **OpenAI API** account for GPT-4o product analysis and UGC script generation
- **ElevenLabs API** account for authentic voice synthesis (requires voice cloning credits)
- **WaveSpeed AI API** account for realistic character generation and lip-sync processing
- **Cloudinary** account for UGC content storage and hosting
- **Telegram Bot** setup for content input and delivery
- **n8n** instance (cloud or self-hosted)

## Customizing this workflow
- **Platform-Specific UGC**: Modify prompts to create UGC content optimized for TikTok, Instagram Reels, YouTube Shorts, or Facebook Stories.
- **Brand Voice**: Adjust testimonial scripts and character personas to match your brand's target audience and tone.
- **Regional Adaptation**: Customize language, cultural references, and character demographics for different markets.
- **UGC Style Variations**: Create different UGC formats: unboxing videos, before/after comparisons, day-in-the-life content, or product demonstrations.
- **Influencer Personas**: Develop specific customer personas (age groups, lifestyles, interests) to create targeted UGC content for different audience segments.
- **Content Scaling**: Set up batch processing to generate multiple UGC variations for A/B testing different approaches and styles.
by phil
This workflow is designed for B2B professionals who want to automatically identify and summarize business opportunities from a company's website. By leveraging Bright Data's Web Unblocker and advanced AI models from OpenRouter, it scrapes relevant company pages ("About Us", "Team", "Contact"), analyzes the content for potential pain points and needs, and synthesizes a concise, actionable report. The final output is formatted for direct use in documents, making it an ideal tool for sales, marketing, and business development teams preparing for prospecting calls or personalizing outreach.

## Who's it for
This template is ideal for:
- **B2B Sales Teams**: Quickly find and qualify leads by identifying specific business needs before a cold call.
- **Marketing Agencies**: Develop personalized content and value propositions based on a prospect's public website information.
- **Business Development Professionals**: Efficiently research potential partners or clients and discover collaboration opportunities.
- **Entrepreneurs**: Gain a competitive edge by understanding a competitor's strategy or a potential client's operations.

## How it works
1. The workflow is triggered by a chat message, typically a URL from an n8n chat application.
2. It uses Bright Data to scrape the website's sitemap and extract all anchor links from the homepage.
3. An AI agent analyzes the extracted URLs to filter for pages relevant to company information (e.g., "about-us", "team", "contact").
4. The workflow then scrapes the content of these specific pages.
5. A second AI agent summarizes the content of each page, looking for business opportunities related to AI-powered automation.
6. The summaries are merged, and a final AI agent synthesizes them into a single, cohesive report formatted for easy reading in a Google Doc.

## How to set up
1. **Bright Data credentials**: Sign up for a Bright Data account and create a Web Unblocker zone. In n8n, create new Bright Data API credentials and copy your API key.
2. **OpenRouter credentials**: Create an account on OpenRouter and get your API key. In n8n, create new OpenRouter API credentials and paste your key.
3. **Chat Trigger node**: Configure the "When chat message received" node. Copy the production webhook URL to integrate with your preferred chat platform.

## Requirements
- An active n8n instance.
- A Bright Data account with a Web Unblocker zone.
- An OpenRouter account with API access.

## How to customize this workflow
- **AI Prompting**: Edit the "systemMessage" parameters in the "AI Agent", "AI Agent1", and "AI Agent2" nodes to change the focus of the opportunity analysis. For example, modify the prompts to search for specific technologies, industry jargon, or different types of business challenges.
- **Model Selection**: The workflow uses openai/o4-mini and openai/gpt-5. You can change these to other models available on OpenRouter by editing the model parameter in the OpenRouter Chat Model nodes.
- **Scraping Logic**: The "extract url" node uses a regular expression to find `<a>` tags. This can be modified or replaced with an HTML Extraction node to target different elements or content on a website.
- **Output Format**: The final output is designed for Google Docs. You can modify the last "AI Agent2" node's prompt to generate the output in a different format, such as a simple JSON object or a markdown list.

Phil | Inforeole 🇫🇷 Contact us to automate your processes.
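The "Scraping Logic" customization above mentions a regular expression over `<a>` tags. As a minimal sketch of what such an extraction step can look like (the function name and regex are illustrative, not the template's actual node code):

```javascript
// Pull href values out of <a> tags in raw HTML with a regex, the same
// general approach the "extract url" node takes. A regex is fine for a
// quick pass; an HTML parser is more robust for messy markup.
function extractLinks(html) {
  const links = [];
  const re = /<a\b[^>]*\bhref=["']([^"']+)["']/gi;
  let match;
  while ((match = re.exec(html)) !== null) {
    links.push(match[1]); // capture group 1 holds the href value
  }
  return links;
}
```

The extracted list would then be handed to the first AI agent, which filters for "about-us"-style paths.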
by Davide
This workflow demonstrates how to create viral AI-generated selfie videos featuring famous characters using a fully automated, platform-independent approach. It replicates the kind of celebrity selfie videos currently going viral on social media and YouTube, where a realistic selfie-style video appears to show the creator together with a well-known public figure. Instead of relying on a proprietary or closed platform, the workflow builds the entire pipeline using direct access to the Google Veo 3.1 APIs, giving full control over generation, orchestration, and distribution.

## Key Advantages
1. ✅ **Fully automated video pipeline**: From prompt to final published video, the entire process runs without manual intervention.
2. ✅ **Spreadsheet-driven control**: Non-technical users can manage video production simply by editing Google Sheets: add new prompts, adjust duration, control merge logic.
3. ✅ **Scalable and modular**: Supports batch processing of many videos; easy to extend with new AI models, platforms, or output formats.
4. ✅ **Reliable async handling**: Built-in wait and status-check logic ensures robustness and prevents failures caused by long-running AI jobs.
5. ✅ **Centralized asset management**: Automatically stores video URLs and statuses, keeping production data organized and auditable.
6. ✅ **Multi-platform ready**: One generated video can be reused for YouTube, TikTok, Instagram, and other social channels.
7. ✅ **Cost and time efficiency**: Eliminates repetitive manual video editing and reduces production time from hours to minutes.

## Ideal Use Cases
- AI-generated storytelling videos
- Social media content automation
- Marketing video campaigns
- Short-form video experiments at scale
- Faceless or semi-automated content channels

## How it Works
This workflow automates the generation of short video clips using AI, merges them into a final video, and optionally uploads the result to multiple platforms.
1. **Trigger & Data Fetching**: The workflow starts with a manual trigger. It reads a Google Sheet containing prompts, image URLs (first and last frames), and duration settings for each video clip to be generated.
2. **Video Clip Generation**: For each row in the sheet, the workflow calls the fal.ai VEO 3.1 API to generate a video clip based on the provided prompt, start image, end image, and duration. The clip is created asynchronously, so the workflow polls the API for status until completion.
3. **Status Polling & URL Retrieval**: Once a clip is marked as COMPLETED, its video URL is fetched and written back to the Google Sheet in the corresponding row.
4. **Video Merging**: After all clips are generated, the workflow collects the video URLs from rows marked for merging and sends them to the fal.ai FFmpeg API to be combined into a single video.
5. **Final Video Processing**: The merged video is polled until ready, then its final URL is retrieved and the video file is downloaded via HTTP request.
6. **Upload & Distribution**: The final video can be uploaded to Google Drive, YouTube (via the upload-post.com API), or Postiz (for multi-platform social media posting). Each upload step is disabled by default and requires configuration (usernames, titles, platform settings).

### WARNING
The workflow may stop at the video generation node with the following message:
> Your request is invalid or could not be processed by the service [item 0]
> The content could not be processed because it contained material flagged by a content checker.

This occurs because images are checked both before and after the video generation process. If this happens, you can either use less restrictive video models while keeping the same workflow structure, or change the source images in the Google Sheets file.

## Set Up Steps
1. **Google Sheets Setup**: Prepare a Google Sheet with the columns START, LAST, PROMPT, DURATION, VIDEO URL, MERGE. Connect n8n to Google Sheets using OAuth2 credentials.
2. **Fal.ai API Configuration**: Obtain an API key from fal.ai. Set up HTTP Header Auth credentials in n8n with the key.
3. **Upload Services Configuration**:
   - Google Drive: configure OAuth2 credentials and specify the target folder ID.
   - YouTube/upload-post.com: enter your username and title in the respective node.
   - Postiz: set up Postiz API credentials and configure platform channels.
4. **Enable Required Nodes**: Enable the upload nodes (Upload Video, Upload to Youtube, Upload to Postiz, Upload to Social) once credentials are configured.
5. **Adjust Polling Intervals**: Modify wait times (Wait 30 sec., Wait 60 sec.) as needed based on video processing times.
6. **Test Execution**: Start the workflow manually via the trigger node. Monitor execution in n8n's editor and check the Google Sheet for updated video URLs.

This workflow is designed for batch video creation and merging, ideal for content pipelines involving AI-generated media.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
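The wait-and-check loop described under "Status Polling & URL Retrieval" follows a standard async polling pattern. A minimal sketch (endpoint shape, state names other than COMPLETED, and field names are assumptions, not the actual fal.ai response schema):

```javascript
// Poll a long-running video job until it reports COMPLETED, mirroring the
// workflow's Wait + status-check loop. fetchStatus is any async function
// that returns the job's current state.
async function pollUntilComplete(fetchStatus, { intervalMs = 30000, maxTries = 20 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const status = await fetchStatus();
    if (status.state === "COMPLETED") return status.videoUrl;
    if (status.state === "FAILED") throw new Error("generation failed");
    // Sleep between checks, like the Wait 30 sec. / Wait 60 sec. nodes.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for video");
}
```

Tuning `intervalMs` here corresponds to the "Adjust Polling Intervals" step above.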
by Rajeet Nair
## Overview
This workflow implements an AI-powered incident investigation and root cause analysis system that automatically analyzes operational signals when a system incident occurs. When an incident is triggered via webhook, the workflow gathers operational context including application logs, system metrics, recent deployments, and feature flag changes. These signals are processed to detect error patterns, cluster similar failures, and correlate them with recent system changes.

The workflow uses vector embeddings to group similar log messages, allowing it to detect dominant failure patterns across services. It then aligns these failures with contextual events such as deployments, configuration changes, or traffic spikes to identify potential causal relationships. An AI agent analyzes all available evidence and generates structured root cause hypotheses, including confidence scores, supporting evidence, and recommended remediation actions. Finally, the workflow posts a detailed incident report directly to Slack, enabling engineering teams to quickly understand the issue and respond faster.

This architecture helps teams reduce mean time to resolution (MTTR) by automating the early stages of incident investigation.

## How It Works

### 1. Incident Trigger
The workflow begins when an incident alert is received through a webhook endpoint. The webhook payload may include information such as:
- incident ID
- severity level
- timestamp
- affected service

This event starts the automated investigation process.

### 2. Workflow Configuration
A configuration node defines the operational parameters used throughout the workflow, including:
- Logs API endpoint
- Metrics API endpoint
- Deployments API endpoint
- Feature flags API endpoint
- Time window for analysis
- Slack channel for incident notifications

This allows the workflow to be easily adapted to different observability stacks.

### 3. Incident Context Collection
The workflow collects system context from multiple sources:
- application logs
- infrastructure or service metrics
- recent deployments
- active feature flags

Gathering this information provides the signals required to understand what happened before and during the incident.

### 4. Log Normalization and Denoising
Raw logs are processed to remove low-value entries such as debug or informational messages. The workflow extracts structured error information including:
- timestamps
- log severity
- services involved
- request or session IDs
- error messages and stack traces

This step ensures that only relevant failure signals are analyzed.

### 5. Failure Pattern Clustering
Error messages are converted into embeddings using OpenAI. The workflow stores these embeddings in an in-memory vector store to group similar log messages together. This clustering step identifies dominant failure patterns that may appear across multiple sessions or services.

### 6. Failure Pattern Analysis
Clustered log data is analyzed to detect recurring error types and dominant failure clusters. The workflow calculates statistics such as:
- total error volume
- most common error types
- error distribution across clusters
- dominant failure patterns

These insights help highlight the primary issues affecting the system.

### 7. Event Correlation Analysis
Failure patterns are then aligned with contextual events such as:
- deployments
- configuration changes
- traffic spikes

The workflow calculates correlation scores based on temporal proximity and assigns likelihood scores to potential causes. This allows the system to identify events that may have triggered the incident.

### 8. AI Root Cause Analysis
An AI agent analyzes the collected signals and generates structured root cause hypotheses. The agent considers:
- error clusters
- deployment timing
- configuration changes
- traffic patterns
- system metrics

The output includes:
- multiple root cause hypotheses
- confidence scores
- supporting evidence
- recommended remediation actions

### 9. Incident Ticket Creation
The final analysis is formatted into a structured incident report and posted to Slack. The Slack message contains:
- incident metadata
- root cause hypotheses
- confidence scores
- evidence
- recommended actions
- affected services

This enables engineers to quickly review the investigation results and take action.

## Setup Instructions

### 1. Configure Observability APIs
Update the Workflow Configuration node with API endpoints for:
- Logs API
- Metrics API
- Deployments API
- Feature Flags API

These APIs should return JSON responses containing recent operational data.

### 2. Configure OpenAI Credentials
Add OpenAI credentials for:
- OpenAI Embeddings
- OpenAI Chat Model

These are used for log clustering and root cause analysis.

### 3. Configure Slack Integration
Add Slack credentials and specify the Slack channel ID in the configuration node. Incident reports will be posted automatically to this channel.

### 4. Configure the Incident Trigger
Deploy the webhook endpoint generated by the Incident Trigger node. Your monitoring or alerting system (PagerDuty, Grafana, Datadog, etc.) can call this webhook when incidents occur.

### 5. Activate the Workflow
Once configured, activate the workflow in n8n. When incidents are triggered, the workflow will automatically run the investigation pipeline and generate a Slack incident report.

## Use Cases
- **Automated Incident Investigation**: Automatically analyze operational signals when alerts are triggered to identify possible causes.
- **AI-Assisted Site Reliability Engineering**: Provide engineers with AI-generated root cause hypotheses and investigation insights.
- **Deployment Impact Detection**: Detect whether a recent deployment or configuration change caused a system failure.
- **Observability Signal Correlation**: Combine logs, metrics, and system events to produce a unified incident analysis.
- **Faster Incident Response**: Reduce mean time to resolution (MTTR) by automating the early stages of incident debugging.
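The temporal-proximity scoring described in "Event Correlation Analysis" can be sketched as follows. This is an illustrative implementation under assumed conventions (a 30-minute lookback window with linear decay); the workflow's actual scoring logic may differ:

```javascript
// Score candidate causes by how close they occurred before the incident:
// events nearer in time get higher likelihood scores, events outside the
// window (or after the incident started) score zero.
function scoreCandidates(incidentStart, events, windowMs = 30 * 60 * 1000) {
  return events
    .map((event) => {
      const gap = incidentStart - new Date(event.timestamp).getTime();
      // Only events that happened before the incident, within the window.
      const score = gap >= 0 && gap <= windowMs ? 1 - gap / windowMs : 0;
      return { ...event, score: Number(score.toFixed(2)) };
    })
    .sort((a, b) => b.score - a.score); // most likely cause first
}
```

A deployment five minutes before the first error burst would rank well above a feature flag changed two hours earlier.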
## Requirements
- n8n with LangChain nodes enabled
- OpenAI API credentials
- Slack credentials
- APIs for retrieving: system logs, service metrics, deployment history, feature flag status
by zawanah
# Categorise and route emails with GPT-5
This workflow demonstrates how to use an AI text classifier to categorise incoming emails, and uses a multi-agent architecture to respond to each email category.

## Use cases
Business owners with a large volume of incoming email, or anyone dealing with a huge influx of messages.

## How it works
Incoming emails are read by a text classifier powered by GPT-5 and routed to the defined categories, where the respective agents take the next steps.
1. The workflow is triggered when an email comes in.
2. GPT reads the email's "subject", "from", and "content" fields to route it accurately to its designated category.
3. **Customer support enquiries**: the customer support agent pulls knowledge from the Pinecone vector database about FAQs and policies, replies via Gmail, and labels the email as "Customer Support".
4. **Finance-related queries**: the finance agent labels the email as "Finance" and assesses whether it concerns making a payment or receiving one from customers. Payment-related emails are sent to the payments team to take action; receipts-related emails are sent to the receivables team. The user is notified via Telegram after any email is sent.
5. **Sales/leads enquiries**: the leads agent labels the email as "Sales Opportunities", pulls knowledge from the Pinecone vector database about the business to generate a response, drafts it in Gmail, and notifies the user via Telegram to review and send. If there is not enough information for the agent to generate a response, the user is notified of this via Telegram as well.
6. **Internal team member emails** are routed to the internal agent, which labels the message as "Internal" and sends the user a summary of the email via Telegram.

## How to set up
1. Set up a Telegram bot via BotFather. See setup instructions here.
2. Set up an OpenAI API key for transcription services (credits required) here.
3. Set up an OpenRouter account. See details here.
4. Set up a Pinecone database. See details here.

## Customization options
- Other than Gmail, it is possible to connect Outlook as well.
- Other than Pinecone, other vector databases should serve the same purpose, e.g. Supabase, Qdrant, Weaviate.

## Requirements
- Gmail account
- Telegram bot
- Pinecone account
- OpenRouter account
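The category-to-action routing described above can be condensed into a simple lookup. This is a hedged sketch: the category labels come from the description, but the mapping structure and fallback are illustrative, not the template's actual node logic:

```javascript
// Map the classifier's category to the Gmail label applied and the
// follow-up action each agent takes. Unknown categories fall back to a
// notify-the-user default.
function routeEmail(category) {
  const routes = {
    "Customer Support": { label: "Customer Support", action: "auto-reply" },
    "Finance": { label: "Finance", action: "forward-to-team" },
    "Sales Opportunities": { label: "Sales Opportunities", action: "draft-for-review" },
    "Internal": { label: "Internal", action: "telegram-summary" },
  };
  return routes[category] ?? { label: "Uncategorized", action: "notify-user" };
}
```

In the workflow itself this branching is done by the text classifier node's outputs rather than code, but the table makes the routing contract explicit.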
by Davide
This workflow automates the entire process of collecting, analyzing, and reporting customer reviews from Feedaty (a platform similar to Trustpilot) using ScrapeGraphAI, transforming raw user feedback into a structured, management-ready reputation report. The report is generated as a PDF using the new Gemini 3 model and ConvertAPI, then uploaded to Google Drive.

## Key Advantages
- ✅ **End-to-End Automation**: From data collection to final PDF delivery, the entire reputation analysis process is fully automated, eliminating manual scraping, copy-paste work, and reporting overhead.
- ✅ **AI-Driven, Management-Ready Insights**: The workflow does not just summarize reviews; it interprets them strategically, producing insights that are immediately useful for management, marketing, customer support, operations, and product & UX teams.
- ✅ **Structured & Consistent Reporting**: Every execution produces reports with the same structure, metrics, and logic, making it ideal for periodic reputation monitoring, trend analysis over time, and internal performance reviews.
- ✅ **Scalable & Configurable**: Easily adaptable to any Feedaty company profile. Page limits and review volume can be adjusted without changing logic, and the workflow can be scheduled or extended to multiple brands.
- ✅ **Data Quality & Compliance**: No personal data exposure, explicit handling of missing or ambiguous information, no assumptions or hallucinated insights, and fully transparent, audit-friendly output.
- ✅ **Seamless Stakeholder Distribution**: Automatic upload to Google Drive ensures reports are centralized, shareable, and accessible, with no additional manual steps.

## Ideal Use Cases
- Brand & reputation monitoring
- Customer experience audits
- Quarterly or monthly executive reports
- Pre-sales or investor documentation
- Customer support performance evaluation

## How it works
This workflow automates the entire process of collecting, analyzing, and reporting customer feedback from Feedaty. It starts by scraping live reviews from a specified company's Feedaty page using ScrapeGraphAI, extracting review details like date, rating, and text.
Each review is then individually analyzed for sentiment (Positive, Neutral, or Negative) using an AI model. All processed reviews are aggregated and passed to a specialized AI agent that performs a comprehensive company-level reputation analysis, generating a structured management report. Finally, the report is converted from HTML to PDF and uploaded to a designated Google Drive folder, creating a fully automated pipeline from data collection to actionable insights delivery.

## Set up steps
1. **Configure Parameters**: Set the Feedaty company identifier (e.g., maxisport) and the maximum number of review pages to scrape in the "Set Parameters" node.
2. **API Credentials**: Ensure the following credentials are configured in n8n:
   - ScrapeGraphAI API (for web scraping)
   - Google Gemini API (for AI sentiment analysis and report generation)
   - Google Drive OAuth2 (for file upload)
   - ConvertAPI (for HTML to PDF conversion)
3. **Customize Output**: Optionally adjust the "Limit reviews" node to control the number of reviews processed, and modify the AI agent's system prompt in "Company Reputation Management" to tailor the report format.
4. **Destination Folder**: Verify that the Google Drive folder ID in the "Upload file" node points to the correct destination for the generated reports.
5. **Execution**: Trigger the workflow manually via the "When clicking 'Test workflow'" node to run the complete scraping, analysis, and reporting pipeline.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
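The aggregation step between per-review sentiment analysis and the company-level report could look something like this sketch (field names and the derived metric are assumptions for illustration):

```javascript
// Tally per-sentiment counts across all analyzed reviews before handing
// them to the reputation-analysis agent. Sentiment labels match the three
// classes the workflow assigns: Positive, Neutral, Negative.
function aggregateSentiment(reviews) {
  const counts = { Positive: 0, Neutral: 0, Negative: 0 };
  for (const review of reviews) {
    if (review.sentiment in counts) counts[review.sentiment] += 1;
  }
  const total = reviews.length || 1; // avoid division by zero on empty input
  return { counts, positiveShare: Number((counts.Positive / total).toFixed(2)) };
}
```

Summary statistics like these give the report-writing agent grounded numbers to cite instead of re-deriving them from raw text.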
by Dr. Firas
# Generate AI Viral Videos with VEO3 and Auto-Publish to TikTok

## Who is this for?
This workflow is for content creators, marketers, and social media managers who want to consistently produce viral-style short videos and publish them automatically to TikTok, without manual editing or uploading.

## What problem is this workflow solving? / Use case
Creating short-form video content that stands out takes time: ideation, scriptwriting, video generation, and publishing. This workflow automates the entire pipeline, from idea generation to TikTok upload, enabling you to scale your content strategy and focus on creativity rather than repetitive tasks.

## What this workflow does
- **Generates viral video ideas** daily using GPT-5
- **Creates structured prompts** for before/after transformation videos
- **Renders cinematic vertical videos** with VEO3 (9:16 format)
- **Saves ideas and metadata** into Google Sheets for tracking
- **Uploads videos automatically to TikTok** via Blotato integration
- **Updates status in Google Sheets** once the video is live

The result: a fully automated daily viral video publishing system.

## Setup
1. **Google Sheets**: Connect your Google Sheets account. Create a sheet with columns for idea, caption, environment, sound, production, and final_output.
2. **OpenAI**: Add your OpenAI API credentials (for GPT-5 mini / GPT-4.1 mini).
3. **VEO3 (Kie API)**: Set up your API key in the HTTP Request node (Generate Video with VEO3).
4. **Blotato**: Connect your Blotato account for TikTok publishing.
5. **Schedule Trigger**: Adjust the Start Daily Content Generation node to fit your preferred posting frequency.

## How to customize this workflow to your needs
- **Platforms**: Extend publishing to YouTube Shorts or Instagram Reels by duplicating the TikTok step.
- **Frequency**: Change the Schedule Trigger to post multiple times per day or only a few times per week.
- **Creative Style**: Modify the system prompts to align with your brand's style (cinematic, minimalist, neon, etc.).
- **Tracking**: Enhance the Google Sheets logging with engagement metrics by pulling TikTok analytics via Blotato.

This workflow helps you build a hands-free, AI-powered content engine, turning raw ideas into published viral videos every day.

🎥 Watch This Tutorial: Step by Step
📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: Linkedin / Youtube
by SpaGreen Creative
# WhatsApp Number Verify & Confirmation System with Rapiwa API and Google Sheets

## Who is this for?
This n8n workflow makes it easy to verify WhatsApp numbers submitted through a form. When someone fills out the form, the automation kicks in, capturing the data via a webhook, checking the WhatsApp number using the Rapiwa API, and sending a confirmation message if the number is valid. All submissions, whether verified or not, are logged into a Google Sheet with a clear status. It's a great solution for businesses, marketers, or developers who need a reliable way to verify leads, manage event signups, or onboard customers using WhatsApp.

## How it works
This n8n automation listens for form submissions via a webhook, validates the provided WhatsApp number using the Rapiwa API, sends a confirmation message if the number is verified, and then appends the submission data to a Google Sheet, marking each entry as verified or unverified.

## Features
- **Webhook Trigger**: Captures form submissions via HTTP POST
- **Data Cleaning**: Formats and sanitizes the WhatsApp number
- **Rapiwa API Integration**: Checks if the number is registered on WhatsApp
- **Conditional Messaging**: Sends confirmation messages only to verified WhatsApp users
- **Google Sheets Integration**: Appends all submissions with a validity status
- **Auto Timestamping**: Adds the submission date in YYYY-MM-DD format
- **Throttling Support**: Built-in delay to avoid hitting API or sheet rate limits
- **Separation of Verified/Unverified**: Distinct handling for both types of entries

## Nodes Used in the Workflow
- **Webhook**
- **Format Webhook Response Data** (Code)
- **Loop Over Items** (Split In Batches)
- **Cleane Number** (Code)
- **check valid whatsapp number** (HTTP Request)
- **If** (Conditional)
- **Send Message Using Rapiwa**
- **verified append row in sheet** (Google Sheets)
- **unverified append row in sheet** (Google Sheets)
- **Wait1**

## How to set up

### Webhook
1. Add a Webhook node to the canvas.
2. Set HTTP Method to POST.
3. Copy the Webhook URL path (/a9b6a936-e5f2-4xxxxxxxxxe0a970d5).
4. In your frontend form or app, make a POST request to it. The request body should include:

```json
{
  "business_name": "ABC Corp",
  "location": "New York",
  "whatsapp": "+1 234-567-8901",
  "email": "user@example.com",
  "name": "John Doe"
}
```

### Format Webhook Response Data
Add a Code node after the Webhook node and use this JavaScript code:

```javascript
const result = $input.all().map(item => {
  const body = item.json.body || {};
  const submitted_date = new Date().toISOString().split('T')[0];
  return {
    business_name: body.business_name,
    location: body.location,
    whatsapp: body.whatsapp,
    email: body.email,
    name: body.name,
    submitted_date: submitted_date
  };
});
return result;
```

### Loop Over Items
1. Insert a SplitInBatches node after the data formatting.
2. Set the Batch Size to a reasonable number (e.g. 1 or 10). This is useful for processing multiple submissions at once, especially if your webhook receives arrays of entries.

> Note: If you expect only one submission at a time, it still helps future-proof your workflow.

### Cleane Number
Add a Code node named "Cleane Number" and paste the following JavaScript:

```javascript
const items = $input.all();
const updatedItems = items.map((item) => {
  const waNo = item?.json["whatsapp"];
  const waNoStr = typeof waNo === 'string' ? waNo : (waNo !== undefined && waNo !== null ? String(waNo) : "");
  const cleanedNumber = waNoStr.replace(/\D/g, "");
  item.json["whatsapp"] = cleanedNumber;
  return item;
});
return updatedItems;
```

### Check WhatsApp Number using Rapiwa
1. Add an HTTP Request node.
2. Set:
   - Method: POST
   - URL: https://app.rapiwa.com/api/verify-whatsapp
3. Add authentication:
   - Type: HTTP Bearer
   - Credentials: select or create a Rapiwa token
4. In Body Parameters, add:
   - number: `={{ $json.whatsapp }}`

This API call checks whether the WhatsApp number exists and is valid. Expected output:

```json
{
  "success": true,
  "data": {
    "number": "+88017XXXXXXXX",
    "exists": true,
    "jid": "88017XXXXXXXXXXXXX",
    "message": "✅ Number is on WhatsApp"
  }
}
```

### Conditional If Check
1. Add an If node after the Rapiwa validation.
2. Configure the condition:
   - Left Value: `={{ $json.data.exists }}`
   - Operation: true
3. If true → valid number → go to messaging and append as "verified". If false → go to the unverified sheet directly.

> Note: This step branches the flow based on the WhatsApp verification result.

### Send WhatsApp Message (Rapiwa)
1. Add an HTTP Request node under the TRUE branch of the If node.
2. Set:
   - Method: POST
   - URL: https://app.rapiwa.com/api/send-message
   - Authentication: HTTP Bearer (use the same Rapiwa token)
3. Body Parameters:
   - number: `={{ $json.data.phone }}`
   - message_type: text
   - message: `Hi {{ $('Cleane Number').item.json.name }}, Thanks! Your form has been submitted successfully.`

This sends a confirmation message via WhatsApp to the verified number.

### Google Sheets – Verified Data
1. Add a Google Sheets node under the TRUE branch (after the message is sent).
2. Set:
   - Operation: Append
   - Document ID: choose your connected Google Sheet
   - Sheet Name: set to your active sheet (e.g., Sheet1)
3. Column Mapping:
   - Business Name: `={{ $('Cleane Number').item.json.business_name }}`
   - Location: `={{ $('Cleane Number').item.json.location }}`
   - WhatsApp Number: `={{ $('Cleane Number').item.json.whatsapp }}`
   - Email : `={{ $('Cleane Number').item.json.email }}`
   - Name: `={{ $('Cleane Number').item.json.name }}`
   - Date: `={{ $('Cleane Number').item.json.submitted_date }}`
   - validity: verified
4. Use OAuth2 Google Sheets credentials for access.

> Note: Make sure the sheet has matching column headers.

### Google Sheets – Unverified Data
Add a Google Sheets node under the FALSE branch of the If node. Use the same settings as the verified node, but set validity to `unverified`. This stores entries with unverified WhatsApp numbers in the same Google Sheet.

### Wait Node
Add a Wait node after both Google Sheets nodes and set the wait time to 2 seconds. This delay prevents API throttling and adds buffer time before processing the next item in the batch.
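On the frontend side, the POST to the webhook just needs to carry the fields shown in the example body above. A minimal sketch (the `buildPayload` helper and its input shape are hypothetical; substitute your own form handling and the production webhook URL):

```javascript
// Assemble the exact payload shape the workflow's Webhook node expects,
// keyed the way the "Format Webhook Response Data" node reads it.
function buildPayload(form) {
  return {
    business_name: form.businessName,
    location: form.location,
    whatsapp: form.whatsapp,
    email: form.email,
    name: form.name,
  };
}

// Example send (browser or Node 18+); webhookUrl is your production URL:
// await fetch(webhookUrl, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildPayload(formValues)),
// });
```

Keeping payload assembly in one helper makes it easy to add client-side validation before the request goes out.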
Google Sheet Column Reference

A Google Sheet formatted like this ➤ Sample Sheet

| Business Name | Location | WhatsApp Number | Email | Name | validity | Date |
|---------------|----------|-----------------|-------|------|----------|------|
| SpaGreen Creative | Dhaka, Bangladesh | 8801322827799 | contact@spagreen.net | Abdul Mannan | unverified | 2025-09-14 |
| SpaGreen Creative | Bangladesh | 8801322827799 | contact@spagreen.net | Abdul Mannan | verified | 2025-09-14 |

> Note: The Email column header includes a trailing space. Ensure your column headers match exactly to prevent data misalignment.

How to customize the workflow
- Modify the confirmation message to match your brand tone
- Add input validation for missing or malformed fields
- Route unverified submissions to a separate spreadsheet or alert channel
- Add Slack or email notifications on new verified entries

Notes & Warnings
- Ensure your Google Sheets credential has access to the target sheet
- Rapiwa requires an active subscription for API access
- Monitor Rapiwa API limits and adjust the wait time as needed
- Keep your webhook URL protected to avoid misuse

Support & Community
- WhatsApp Support: Chat Now
- Discord: Join SpaGreen Community
- Facebook Group: SpaGreen Support
- Website: spagreen.net
- Developer Portfolio: Codecanyon SpaGreen
by Karol
How it works

This workflow turns any URL sent to a Telegram bot into ready-to-publish social posts:
1. Trigger: Telegram message (checks if it contains a URL).
2. Fetch & parse: Downloads the page and extracts readable text + title.
3. AI writing: Generates platform-specific copy (Facebook, Instagram, LinkedIn).
4. Image: Creates an AI image and stores it in Supabase Storage.
5. Publish: Posts to Facebook Pages, Instagram Business, LinkedIn.
6. Logging: Updates Google Sheets with post URLs and sends a Telegram confirmation (image + links).

Setup
- Telegram – create a bot and connect it via n8n Telegram credentials.
- OpenAI / Gemini – add your API key in n8n Credentials and select it in the AI nodes.
- Facebook/Instagram (Graph API) – create a credential called facebookGraph with:
  • accessToken (page-scoped or system user)
  • pageId (for Facebook Page photos)
  • igUserId (Instagram Business account ID)
  • optional fbApiVersion (default v19.0)
- LinkedIn – connect with OAuth2 in the LinkedIn node (leave as credential).
- Supabase – credential supabase with url and apiKey. Ensure a bucket exists (the default used in the Set node is social-media).
- Google Sheets – replace YOUR_GOOGLE_SHEET_ID and Sheet1, and grant your n8n Google OAuth2 access.

Notes
• No API keys are stored in the template; everything runs via n8n Credentials.
• You can change the bucket name, image size/quality, and AI prompts in the respective nodes.
• The confirmation message on Telegram includes direct permalinks to the published posts.

Required credentials
• Telegram Bot
• OpenAI (or Gemini)
• Facebook/Instagram Graph
• LinkedIn OAuth2
• Supabase (url + apiKey)
• Google Sheets OAuth2

Inputs
• A Telegram message that contains a URL.

Outputs
• Social posts published on Facebook, Instagram, LinkedIn.
• Row appended/updated in Google Sheets with post URLs and image link.
• Telegram confirmation with the generated image + post links.
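The trigger's URL check can be sketched with a simple regex in a Code node. This is a minimal sketch; the actual workflow may use a different pattern or n8n's built-in expression helpers.

```javascript
// Extract the first http(s) URL from an incoming Telegram message, or null.
function extractUrl(text) {
  const match = (text || "").match(/https?:\/\/\S+/);
  return match ? match[0] : null;
}

console.log(extractUrl("Check this out: https://example.com/article"));
// "https://example.com/article"
```

If `extractUrl` returns null, the workflow can stop early instead of fetching and parsing an empty page.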
by Pratyush Kumar Jha
Video → Newsletter AI Agent

This n8n workflow converts a YouTube video into a polished, email-ready newsletter. It scrapes the transcript, extracts a thumbnail/logo and brand color theme, and uses multiple AI agents to (1) clean and summarize the transcript into three newsletter sections and (2) convert that content into a styled, color-aware HTML newsletter. It then saves the draft to Google Sheets and sends the email to subscribers via Gmail. The flow is optimized for batch sending and brand-consistent HTML output.

How it works (step-by-step)
1. Trigger — On form submission accepts Brand Name, Brand Website, and a YouTube video link.
2. Site scrape & colour study — HTTP requests + Information Extractor → an AI agent derives the brand color theme (primary/secondary/accent/background).
3. Transcript retrieval — Two YouTube transcript scrapers (Apify actors) fetch the video transcript and thumbnail; a small Code node merges transcript chunks.
4. Summarization & journalism — AI Agent2 (LangChain/Gemini) cleans the transcript, extracts the thesis and key points, and writes 3 newsletter sections in a journalistic tone.
5. HTML conversion — The Convert Newsletter to HTML (AI) agent applies the fixed layout, injects only text color variables (keeping the layout intact), and outputs a Subject + HTML body (≤1000 words).
6. Aggregate & merge — Merge + Aggregate assemble files, assets, and parsed outputs.
7. Save & send — The draft is saved to Google Sheets (Save Newsletter Draft in Google Sheet); the workflow then loops through subscribers from a subscribers sheet, and Sending Emails to all the Subscribers (Gmail node) sends the HTML to each address in batches.
8. Batching & looping — Split In Batches handles large subscriber lists; Loop Over Items triggers the HTML conversion per recipient batch.

Quick Setup Guide
👉 Demo & Setup Video
👉 Sheet Template
👉 Course

Nodes of interest
- On form submission (formTrigger) — entry point for video + brand inputs.
- You Tube Transcript Scraper, You Tube Transcript Scraper1 (HTTP Request → Apify) — transcript + thumbnail fetching.
- Information Extractor & AI Agent1 — website color/theme extraction.
- Code in JavaScript — merges transcript pieces into a single text payload.
- AI Agent2 (LangChain agent + Gemini Chat Model) — transcript → journalist-style newsletter sections.
- Convert Newsletter to HTML (AI) (LangChain agent + Structured Output Parser) — builds the constrained, brand-aware HTML email and subject.
- Structured Output Parser1/2 — enforce schemas for the color theme / structured outputs.
- Get row(s) in sheet & Save Newsletter Draft in Google Sheet (Google Sheets) — subscriber list + draft storage.
- Loop Over Items / Split In Batches — batch processing for sends.
- Sending Emails to all the Subscribers (Gmail) — SMTP/OAuth send.
- OpenRouter Chat Model — the LLM provider configured in the workflow.

What you'll need (credentials & resources)
- Google Sheets OAuth2 (for reading subscribers & saving drafts).
- Gmail OAuth2 (for sending HTML emails).
- Gemini / LLM provider credentials (Gemini API key or equivalent) for the LangChain agents.
- Apify API key (for the YouTube transcript scrapers).
- ConvertAPI (or similar) key if you convert logos (SVG→PNG) server-side.
- Host storage / publicly accessible URLs for images (thumbnails, logos), or a file store (S3).
- Optional: SendGrid / Mailgun credentials if you swap Gmail for a transactional email provider.

Security note: do NOT hardcode credentials in node parameters; use the n8n credentials manager or environment variables.

Recommended settings & best practices
- **Batch size & rate limits:** set Split In Batches to a conservative batch size (e.g., 50–200) and add delays between batches to avoid provider rate limits and Gmail throttling.
- **Retries & timeouts:** enable retries for HTTP Request nodes and set sensible timeouts (e.g., 30–60s). Use exponential backoff.
- **LM controls:** set token/response length limits and max_output_tokens (or equivalent) to avoid runaway costs; enforce the 1000-word HTML hard limit in the prompt.
- **Validation:** validate the YouTube URL and confirm that transcript content exists before invoking AI summarization (fail fast with a clear error).
- **Schema enforcement:** use Structured Output Parser nodes with strict JSON schemas to prevent malformed outputs.
- **Testing:** run with a small subscriber test sheet and use a safety test Gmail account before sending to production lists.
- **Logging & monitoring:** log each run (video URL, subject, send count, errors) to a monitoring sheet or external logging service.
- **Privacy & compliance:** ensure recipients have consented to receive emails (store opt-ins); include unsubscribe handling if you move beyond one-off sends. Comply with CAN-SPAM and local laws.
- **Credential rotation:** rotate API keys periodically and revoke compromised tokens.
- **Content safety:** instruct the LM agents to avoid hallucinated citations — only include links you can verify.

Customization ideas
- Multi-language support: auto-detect the video language and run the summarizer in that language.
- A/B subject testing: generate 2–3 subject lines and send variations to subsets.
- Scheduling: add a scheduler node to delay sends or publish at optimal send times per recipient timezone.
- Integrate with SendGrid/Mailgun for higher throughput and analytics (opens/clicks).
- Add personalization tokens (first name, company) from the subscribers sheet to the HTML (merge fields).
- Auto-attach the transcript as a plain-text footer, or include a "Read more" link to a hosted full article.
- Add analytics: record opens, clicks, and engagement back into Google Sheets or a database.
- Support other platforms: ingest videos from Vimeo, Loom, or uploaded MP4s.
- Use a templating engine to allow multiple newsletter layouts and style variants.
- Auto-generate social posts (Twitter/X, LinkedIn) from the newsletter summary.
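The 1000-word hard limit mentioned in the best practices can also be enforced defensively in a Code node after the HTML-conversion agent, rather than trusting the prompt alone. This is a sketch, not part of the template: the function name is illustrative and the tag-stripping regex is approximate.

```javascript
// Count visible words in an HTML email body by stripping tags first.
function countHtmlWords(html) {
  const text = (html || "").replace(/<[^>]*>/g, " ");
  return text.trim().split(/\s+/).filter(Boolean).length;
}

const htmlBody = "<p>Hello subscribers,</p><p>welcome to this week's issue.</p>";
if (countHtmlWords(htmlBody) > 1000) {
  // Fail fast so an oversized newsletter never reaches Gmail.
  throw new Error("Newsletter exceeds the 1000-word limit");
}
```

Placed between the HTML agent and the Gmail send, this check turns a soft prompt constraint into a hard workflow guarantee.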
Tags n8n newsletter youtube transcript langchain gemini apify gmail google-sheets html-email automation batching ai content-ops
by WeblineIndia
Automated Social Media Lead Processing with AI Summaries, Slack Alerts & Jira Ticketing

This workflow automatically collects new lead messages from social media platforms, LinkedIn, or web forms, filters relevant marketing inquiries using keywords, classifies and summarizes each lead with AI, logs it to Google Sheets, creates a Jira task, and sends Slack notifications. It also generates weekly lead reports for team insights. It reduces manual triage, ensures no valid inquiry is missed, and keeps your team updated with both immediate notifications and summary reports.

Quick Start – Implementation Steps
1. Connect your webhook to your social media inbox, LinkedIn, Twitter, or web form.
2. Add your OpenAI, Google Sheets, Jira, and Slack credentials.
3. Enable the workflow.
4. Send a test message to confirm Google Sheets logging, Slack notification, and Jira task creation.
5. Activate the scheduler for weekly reports to track lead performance.

What It Does
- Filters incoming messages for marketing-related keywords like ad request, promo request, collaboration, partnership, or social media inquiry.
- Uses OpenAI GPT to classify the lead into categories such as Sales, Support, Partnership, Influencer Inquiry, or General Lead.
- Generates a short AI summary of the message.
- Logs structured lead data to Google Sheets, including username, source, category, summary, and timestamp.
- Creates a Jira task automatically with summary, description, category, and received time.
- Sends a Slack notification to alert the team instantly.
- Runs a scheduled workflow that aggregates weekly leads and sends a weekly report to Slack.

This ensures a structured, automated pipeline for capturing, summarizing, and assigning leads efficiently.

Who's It For
- Marketing and sales teams managing leads from social media and web forms.
- Agencies handling client campaigns and inquiries.
- Businesses that want automated notifications and ticketing.
- Teams using Slack and Jira for daily operations.

Requirements to Use This Workflow
- n8n account or self-hosted instance.
- Webhook-enabled social media inbox or lead form.
- OpenAI API key.
- Slack bot token with channel posting permission.
- Jira Software Cloud API credentials.
- Google Sheets credentials.
- Predefined keyword list for filtering messages.

How It Works & Setup Steps
1. Get DM (Webhook Trigger) — receives new messages from social media or web forms and starts the workflow.
2. Lead Keyword Filter (Code Node) — filters incoming messages for predefined marketing keywords and removes irrelevant or spam messages.
3. AI Lead Classifier (OpenAI Node) — classifies the lead into categories (Sales, Support, Partnership, Influencer Inquiry, General Lead) and generates a one-line summary using GPT-4.1.
4. AI Output Parser (Code Node) — parses the AI JSON output and merges it with the original message data, adding a timestamp and structured fields.
5. Store Lead (Google Sheets Node) — logs structured lead data to Google Sheets, including username, source, category, summary, and timestamp.
6. Create Task (Jira Node) — automatically creates a Jira story or task in your selected project with the AI summary, category, and timestamp.
7. Send a Summary (Slack Node) — sends a formatted message to your selected Slack channel, alerting your team of the new lead.
8. Weekly Reporting
   - **Schedule Trigger** – triggers the weekly reporting workflow.
   - **Extract Lead Data** – fetches all logged leads from Google Sheets.
   - **Weekly Lead Filter** – filters data to include leads from the last week.
   - **Report Data Formatter** – calculates total leads, category counts, source counts, and example leads.
   - **Weekly Report Slack** – sends a formatted weekly lead summary to Slack.

How to Customize Nodes
- Keyword Filter: add or remove keywords in the JavaScript code to match your specific lead types or campaigns.
- AI Classification: update the OpenAI prompt for different summary lengths, tones, or lead categories.
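The Lead Keyword Filter Code node can be sketched like this. The keyword list mirrors the examples given above; the function name is illustrative, and you would adjust the list in your own node.

```javascript
// Keep only messages that mention at least one marketing keyword.
const KEYWORDS = [
  "ad request",
  "promo request",
  "collaboration",
  "partnership",
  "social media inquiry",
];

function isMarketingLead(message) {
  const text = (message || "").toLowerCase();
  return KEYWORDS.some((kw) => text.includes(kw));
}

console.log(isMarketingLead("Hi! Interested in a partnership with your brand."));
// true
```

Matching on lowercase substrings keeps the filter forgiving of capitalization; for stricter matching you could switch to word-boundary regexes.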
- Google Sheets Logging: map additional columns like email, phone, or campaign source as needed.
- Jira Fields: customize summary, description, labels, priority, or assignees based on your project requirements.
- Slack Message Format: modify emojis, line breaks, and formatting to suit your team's Slack notifications.

Add-Ons (Extend the Workflow)
- Send email alerts for high-priority leads.
- Trigger WhatsApp replies using an API provider.
- Integrate with CRMs like HubSpot, Zoho, or Salesforce.
- Add sentiment analysis to detect frustrated or VIP users.
- Automate daily or weekly analytics reports to Slack.

Use Case Examples
- Collecting Instagram, LinkedIn, and Twitter DMs and logging them to Google Sheets.
- Creating automated Jira tickets for marketing inquiries.
- Sending instant Slack notifications for new leads.
- Filtering out irrelevant messages and only processing valid marketing leads.
- Generating weekly lead summary reports for team review.

Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| No leads appearing | Webhook not receiving messages | Check the webhook URL and ensure messages are sent correctly |
| AI summary empty | OpenAI API key invalid or model limit reached | Regenerate the API key / check usage |
| Jira task not created | Missing required Jira fields or incorrect project ID | Add the required fields or update Jira project settings |
| Slack message not sent | Wrong channel ID or missing permissions | Reconnect Slack credentials |
| Filter passes 0 items | Keywords do not match | Update or expand the keyword list in the filter node |

Need Help?
If you need assistance setting up this workflow, customizing nodes, building add-ons, or automating more processes, our n8n workflow development team at WeblineIndia is happy to help. We can guide you through integrations, scaling, or building end-to-end automation systems tailored to your business.
by Oneclick AI Squad
AI Customer Call Analyzer — Voice → Insights → CRM with GPT-4

Converts raw sales call recordings into structured CRM intelligence. Uploads audio → transcribes via Whisper → GPT-4 extracts intent, sentiment, objections, and next steps → updates the CRM and sends a structured summary to the sales team.

How it works
1. Upload Call Recording — a webhook receives the audio file upload (mp3, wav, m4a) from the sales rep portal.
2. Validate & Prepare Audio — checks the file type and size limits, and extracts call metadata.
3. Transcribe via Whisper — sends the audio to the OpenAI Whisper API for high-accuracy transcription.
4. Wait — Transcription Buffer — holds until transcription is confirmed complete.
5. GPT-4 Call Intelligence — extracts intent, sentiment, objections, buying signals, and action items.
6. MCP Context Enrichment — pulls CRM history and enriches the analysis with account context.
7. Update CRM Record — writes structured insights back to the CRM (HubSpot / Salesforce).
8. Send Sales Summary — emails the rep and manager with a call scorecard and next steps.
9. Audit Log — records all processing steps for compliance and coaching.

Setup Steps
1. Import this workflow into n8n.
2. Configure credentials:
   - OpenAI API — for Whisper transcription and GPT-4 analysis.
   - HubSpot / Salesforce — CRM update target.
   - Google Sheets — audit log and call registry.
   - SMTP / Gmail — sales summary delivery.
3. Set your CRM API endpoint and field mapping in the update node.
4. Configure your sales team email list in the notify node.
5. Activate the workflow.

Sample Upload Payload

```json
{
  "callId": "CALL-20250222-0042",
  "repEmail": "jane.smith@company.com",
  "repName": "Jane Smith",
  "contactEmail": "buyer@prospect.com",
  "contactName": "Bob Johnson",
  "companyName": "Acme Corp",
  "dealStage": "negotiation",
  "callDurationSecs": 1847,
  "audioUrl": "https://storage.company.com/calls/call-0042.mp3"
}
```

Features
- **Whisper-powered transcription** with speaker diarization hints
- **GPT-4 intent and sentiment** extraction with confidence scores
- **Objection and buying signal** detection
- **Auto CRM field mapping** — no manual data entry
- **Sales scorecard** with talk ratio, next-step clarity, and deal risk
- **Full audit trail** for call coaching and compliance

Explore More
LinkedIn & Social Automation: Contact us to design AI-powered lead nurturing, content engagement, and multi-platform reply workflows tailored to your growth strategy.
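The Validate & Prepare Audio step can be sketched as a small check on the upload payload. The field names follow the sample payload above; the function name and the exact rejection messages are assumptions, and the allowed extensions come from the formats listed in the workflow description (mp3, wav, m4a).

```javascript
// Accept only the audio formats named in the workflow description.
const ALLOWED_EXTENSIONS = ["mp3", "wav", "m4a"];

function validateCallPayload(payload) {
  if (!payload.callId || !payload.audioUrl) {
    return { valid: false, reason: "Missing callId or audioUrl" };
  }
  const ext = payload.audioUrl.split(".").pop().toLowerCase();
  if (!ALLOWED_EXTENSIONS.includes(ext)) {
    return { valid: false, reason: `Unsupported audio format: ${ext}` };
  }
  return { valid: true };
}

console.log(validateCallPayload({
  callId: "CALL-20250222-0042",
  audioUrl: "https://storage.company.com/calls/call-0042.mp3",
}));
```

Rejecting malformed payloads before the Whisper call avoids spending transcription credits on files the pipeline cannot process.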