by BluePro
This workflow monitors targeted subreddits for potential sales leads using Reddit's API, AI content analysis, Supabase, and Google Sheets. It is built specifically to discover posts from Reddit users who may benefit from a particular product or service, and it can be easily customized for any market.

🔍 Features

- **Targeted Subreddit Monitoring:** Searches multiple niche subreddits like smallbusiness, startup, sweatystartup, etc., using relevant keywords.
- **AI-Powered Relevance Scoring:** Uses OpenAI GPT to analyze each post and determine if it's written by someone who might benefit from your product, returning a simple "yes" or "no."
- **Duplicate Lead Filtering with Supabase:** Ensures you don't email the same lead more than once by storing already-processed Reddit post IDs in a Supabase table.
- **Content Filtering:** Filters out posts with no body text or no upvotes to ensure only high-quality content is processed.
- **Lead Storage in Google Sheets:** Saves qualified leads into a connected Google Sheet with key data (URL, post content, subreddit, and timestamp).
- **Email Digest Alerts:** Compiles relevant leads and sends a daily digest of matched posts to your team's inbox for review or outreach.
- **Manual or Scheduled Trigger:** Can be manually triggered or automatically scheduled (via the built-in Schedule Trigger node).

⚙️ Tech Stack

- **Reddit API** – for post discovery
- **OpenAI Chat Model** – for AI-based relevance filtering
- **Supabase** – for lead de-duplication
- **Google Sheets** – for storing lead details
- **Gmail API** – for sending email alerts

🔧 Customization Tips

- **Adjust Audience:** Modify the subreddits and keywords in the initial Code node to match your market.
- **Change the AI Prompt:** Customize the prompt in the "Analysis Content by AI" node to describe your product or service.
- **Search Comments Instead:** To monitor comments instead of posts, change `type=link` to `type=comment` in the Reddit Search node.
- **Change Email Recipients:** Edit the Gmail node to direct leads to a different email address or format.
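To adjust the audience, the initial Code node can emit one item per subreddit/keyword pair for the Reddit Search node to consume. A minimal sketch, assuming a helper named `buildSearchItems` (the function name and the example lists are illustrative, not the template's exact code):

```javascript
// Sketch of the initial Code node: build one item per subreddit/keyword
// pair for the Reddit Search node. The lists are placeholders for your market.
function buildSearchItems(subreddits, keywords) {
  const items = [];
  for (const subreddit of subreddits) {
    for (const keyword of keywords) {
      // n8n items carry their payload under the `json` key
      items.push({ json: { subreddit, keyword } });
    }
  }
  return items;
}

// In the n8n Code node you would end with something like:
// return buildSearchItems(['smallbusiness', 'startup'], ['bookkeeping', 'invoicing']);
```

Swapping in your own subreddits and keywords is then a one-line change.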
by Trung Tran
Automating AWS S3 Operations with n8n: Buckets, Folders, and Files

Watch the demo video below. This tutorial walks you through setting up an automated workflow that generates AI-powered images from prompts and securely stores them in AWS S3. It leverages the new AI Tool Node and OpenAI models for prompt-to-image generation.

Who's it for

This workflow is ideal for:

- **Designers & marketers** who need quick, on-demand AI-generated visuals.
- **Developers & automation builders** exploring AI-driven workflows integrated with cloud storage.
- **Educators or trainers** creating tutorials or exercises on AI image generation.
- **Businesses** looking to automate image content pipelines with AWS S3 storage.

How it works / What it does

1. Trigger: The workflow starts manually when you click "Execute Workflow".
2. Edit Fields: You can provide input fields such as image description, resolution, or naming convention.
3. Create AWS S3 Bucket: Automatically creates a new S3 bucket if it doesn't exist.
4. Create a Folder: Inside the bucket, a folder is created to organize generated images.
5. Prompt Generation Agent: An AI agent generates or refines the image prompt using the OpenAI Chat Model.
6. Generate an Image: The refined prompt is used to generate an image using AI.
7. Upload File to S3: The generated image is uploaded to the AWS S3 bucket for secure storage.

This workflow showcases how to combine AI and cloud storage seamlessly in an automated pipeline.

How to set up

1. Import the workflow into n8n.
2. Configure the following credentials: AWS S3 (Access Key, Secret Key, Region) and OpenAI API Key (for Chat + Image models).
3. Update the Edit Fields node with your preferred input fields (e.g., image size, description).
4. Execute the workflow and test by entering a sample image prompt (e.g., "Futuristic city skyline in watercolor style").
5. Check your AWS S3 bucket to verify the uploaded image.

Requirements

- **n8n** (latest version with AI Tool Node support).
- **AWS account** with S3 permissions to create buckets and upload files.
- **OpenAI API key** (for prompt refinement and image generation).
- Basic familiarity with AWS S3 structure (buckets, folders, objects).

How to customize the workflow

- **Custom Buckets:** Replace the auto-create step with an existing S3 bucket.
- **Image Variations:** Generate multiple image variations per prompt by looping the image generation step.
- **File Naming:** Adjust file naming conventions (e.g., timestamp, user input).
- **Metadata:** Add metadata such as tags, categories, or owner info when uploading to S3.
- **Alternative Storage:** Swap AWS S3 with Google Cloud Storage, Azure Blob, or Dropbox.
- **Trigger Options:** Replace the manual trigger with a Webhook, Form Submission, or Scheduler for automation.

✅ This workflow is a hands-on example of how to combine AI prompt engineering, image generation, and cloud storage automation into a single streamlined process.
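A timestamp-plus-slug naming convention (one of the customizations suggested above) could be implemented in a small Code node before the upload step. A sketch under the assumption of a hypothetical `buildObjectKey` helper, not the template's own code:

```javascript
// Illustrative file-naming helper for the S3 upload step.
// Produces keys like "generated-images/2024-03-14T10-22-05_futuristic-city-skyline.png".
function buildObjectKey(folder, description, date) {
  // Slugify the image description for a readable, URL-safe filename
  const slug = description
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
    .slice(0, 40);
  // ISO timestamp with characters S3 keys tolerate everywhere
  const stamp = date.toISOString().replace(/:/g, '-').replace(/\.\d+Z$/, '');
  return `${folder}/${stamp}_${slug}.png`;
}
```

Feeding the key into the S3 upload node keeps every generated image uniquely named and sortable by creation time.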
by Mirai
Icebreaker Generator powered by ChatGPT

This n8n template crawls a company website, distills the content with AI, and produces a short, personalized icebreaker you can drop straight into your cold emails or CRM. Perfect for SDRs, founders, and agencies who want "real research" at scale.

Good to know

- Works from a Google Sheet of leads (domain + LinkedIn, etc.).
- Handles common scrape failures gracefully and marks the lead's Status as Error.
- Uses ChatGPT to summarize pages and craft one concise, non-generic opener.
- Output is written back to the same Google Sheet (IceBreaker, Status).
- You'll need Google credentials (for Sheets) and OpenAI credentials (for GPT).

How it works

Step 1 — Discover internal pages

- Reads a lead's website from Google Sheets.
- Scrapes the home page and extracts all links.
- A Code node cleans the list (removes emails/anchors/social/external domains, normalizes paths, de-duplicates) and returns unique internal URLs.
- If the home page is unreachable or no links are found, the lead is marked Error and the workflow moves on.

Step 2 — Convert pages to text

- Visits each collected URL and converts the response into HTML/Markdown text for analysis.
- You can cap depth/amount with the Limit node.

Step 3 — Summarize & generate the icebreaker

- A GPT node produces a two-paragraph abstract for each page (JSON output).
- An Aggregate node merges all abstracts for the company.
- Another GPT node turns the merged summary into a personalized, multi-line icebreaker (spartan tone, non-obvious details).
- The result is written back to Google Sheets (IceBreaker = ..., Status = Done). The workflow loops to the next lead.

How to use

Prepare your sheet

- Include at least organization_website_url, linkedin_url, and any other lead fields you track.
- Keep empty IceBreaker and Status columns for the workflow to fill.

Connect credentials

- Google Sheets: use the Google account that owns the sheet and link it in the nodes.
- OpenAI: add your API key to the GPT nodes ("Summarize Website Page", "Generate Multiline Icebreaker").

Run the workflow

- Start with the Manual Trigger (or replace it with a schedule/webhook).
- Adjust Limit if you want fewer/more pages per company.
- Watch Status (Done/Error) and IceBreaker populate in your sheet.

Requirements

- n8n instance
- Google Sheets account & access to the leads sheet
- OpenAI API key (for summarization + icebreaker generation)

Customizing this workflow

- Tone & format: tweak the prompts (both GPT nodes) to match your brand voice and structure.
- Depth: change the Limit node to scan more or fewer pages; add simple rules to prioritize certain paths (e.g., /about, /blog/*).
- Fields: write additional outputs (e.g., Company Summary, Key Products, Recent News) back to new sheet columns.
- Lead selection: filter rows by Status = "" (or custom flags) to process only untouched leads.
- Error handling: expand the Error branch to retry with www./HTTP→HTTPS or to log diagnostics in a separate tab.

Tips

- Keep icebreakers short, specific, and free of clichés; small, non-obvious details from the site convert best.
- Start with a small batch to validate quality, then scale up.
- Consider adding a rate limit if target sites throttle requests.

In short: Sheet → crawl internal pages → AI abstracts → single tailored icebreaker → write back to the sheet, then repeat for the next lead. This automation works great with our automation for automated cold emailing.
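The link-cleaning Code node from Step 1 could be sketched as follows. This is an assumption-laden illustration (the `cleanLinks` name and the social-domain list are mine, not the template's), but it performs the steps described: drop emails, anchors, social and external domains, normalize paths, and de-duplicate:

```javascript
// Illustrative sketch of the Step 1 link cleaner: keep only unique
// internal page URLs extracted from the scraped home page.
function cleanLinks(links, baseHost) {
  const social = ['facebook.com', 'twitter.com', 'linkedin.com', 'instagram.com', 'youtube.com'];
  const seen = new Set();
  for (const link of links) {
    // Skip emails and in-page anchors outright
    if (!link || link.startsWith('mailto:') || link.startsWith('#')) continue;
    let url;
    try {
      url = new URL(link, `https://${baseHost}`); // resolve relative paths
    } catch {
      continue; // malformed href
    }
    const host = url.hostname.replace(/^www\./, '');
    if (host !== baseHost) continue;               // external domain
    if (social.some((s) => host.endsWith(s))) continue; // social profile
    url.hash = '';                                  // drop anchors
    // Normalize: strip trailing slashes so /about and /about/ collapse
    const normalized = url.origin + url.pathname.replace(/\/+$/, '');
    seen.add(normalized || url.origin);
  }
  return [...seen];
}
```

In the real node you would map the returned URLs back into n8n items for the page-to-text step.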
by Harsh Agrawal
Automated SEO Intelligence Platform with DataForSEO and Claude

Transform any company website into a detailed SEO audit report in minutes! This workflow combines real-time web scraping, comprehensive SEO data analysis, and advanced AI reasoning to deliver client-ready reports automatically. Perfect for digital agencies scaling their audit services, freelance SEO consultants automating research, or SaaS teams analyzing competitor strategies before sales calls.

The Process

1. Discovery Phase: Input a company name and website URL to kick things off. The system begins with website content extraction.
2. Intelligence Gathering: A dedicated scraper sub-workflow extracts all website content and converts it to structured markdown.
3. Strategic Analysis: LLMs process the scraped content to understand the business model, target market, and competitive positioning. They generate business research insights and product strategy recommendations tailored to that specific company. Once this analysis completes, the DataForSEO API pulls technical metrics, backlink profiles, keyword rankings, and site health indicators.
4. Report Assembly: All findings flow into a master report generator that structures the data into sections covering technical SEO, content strategy, competitive landscape, and actionable next steps. Custom branded cover and closing pages are added.
5. Delivery: The HTML report is converted to PDF and emailed directly to your recipient - no manual intervention needed.

Setup Steps

1. Add API credentials: OpenRouter (for AI), DataForSEO (for scraping/SEO data), and PDFco (for PDF generation).
2. Configure email sending through your preferred service (Gmail, SendGrid, etc.).
3. Optional: Upload custom first/last page PDFs for white-label branding.
4. Test with your own website first to see the magic happen!

Customize It

- Adjust analysis depth: Modify the AI prompts to focus on specific SEO aspects (local SEO, e-commerce, B2B SaaS, etc.).
- Change report style: Edit the HTML template in the Sample_Code node for different formatting.
- Add integrations: Connect to your CRM to automatically trigger reports when leads enter your pipeline.
- Scale it up: Process multiple URLs in batch by feeding in a Google Sheet of prospects.

What You'll Need

- OpenRouter account (Claude Opus 4.1 recommended for best insights)
- DataForSEO subscription (handles both scraping and SEO metrics)
- PDFco account (converts your reports to professional PDFs)
- Email service credentials configured in n8n

Need Help?

Connect with me on LinkedIn if you have any questions.
by Shahzaib Anwar
📌 Overview

This workflow automatically processes incoming Shopify/Gmail leads and pushes them into HubSpot as both Contacts and Deals. It helps sales and marketing teams capture leads instantly, enrich CRM data, and avoid missed opportunities.

⚡ How it works

1. Trigger: Watches for new emails in Gmail.
2. Extract Data: Parses the email body (Name, Email, City, Phone, Message, Product URL/Title).
3. Condition: Checks if the sender is Shopify before processing.
4. HubSpot: Creates/updates a Contact with customer details, then creates a Deal associated with that contact.

🎯 Benefits

- 📥 Automates lead capture → CRM
- 🚫 Eliminates manual copy-paste from Gmail
- 🔄 Real-time sync between Gmail and HubSpot
- 📈 Improves sales follow-up speed and accuracy

🛠 Setup Steps

1. Import this workflow into your n8n instance.
2. Connect your Gmail and HubSpot credentials.
3. Replace the HubSpot Deal Stage ID with your own pipeline stage.
4. (Optional) Adjust the Code Node regex if your email format differs.
5. Activate the workflow and test with a sample lead email.

📝 Example Email Format

Name: John Doe
Email: john@example.com
City: London
Phone: +44 7000 000000
Body: Interested in product
Product Url: https://example.com/product
Product Title: Sample Product

Sticky notes

- Gmail Trigger: 📧 Watches for new emails in Gmail. Polls every minute and passes email data into the flow.
- Get a Message: 📩 Fetches the full Gmail message content (body + metadata) for parsing.
- Extract From Email: 🔍 Extracts the sender's email address from Gmail to identify the source.
- If Sender is Shopify: ✅ Condition node that ensures only Shopify-originated emails/leads are processed.
- Code Node (Regex Parser): 🧾 Parses the email body using regex to extract Name, Email, City, Phone, Message, Product URL, and Title.
- Edit Fields (Set Node): 📝 Cleans and structures the extracted fields into proper JSON format before sending to HubSpot.
- HubSpot → Create/Update Contact: 👤 Creates or updates a HubSpot Contact with the extracted lead details.
- HubSpot → Create Deal: 💼 Creates a HubSpot Deal linked to the Contact, including campaign/product information.
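Given the "Field: value" layout in the example email above, the regex parser Code node could look like this sketch. The field labels match the example format, but the helper name and structure are illustrative assumptions, not the template's exact code:

```javascript
// Illustrative sketch of the regex parser Code node, assuming a
// "Label: value" line-per-field email body like the example format above.
function parseLeadEmail(body) {
  // Capture everything after "Label:" up to the end of that line
  const grab = (label) => {
    const m = body.match(new RegExp(`${label}:\\s*(.+)`, 'i'));
    return m ? m[1].trim() : '';
  };
  return {
    name: grab('Name'),
    email: grab('Email'),
    city: grab('City'),
    phone: grab('Phone'),
    message: grab('Body'),
    productUrl: grab('Product Url'),
    productTitle: grab('Product Title'),
  };
}
```

If your Shopify notification template differs, adjust the labels (or the whole pattern) accordingly, as the setup steps suggest.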
by vinci-king-01
Error Alert Aggregator – Email and Jira

This workflow aggregates error logs arriving from multiple sources, deduplicates identical events within a configurable time window, and sends a single consolidated notification via Email and Jira. It prevents alert fatigue by batching similar errors and guarantees that responsible teams are informed through both channels.

Pre-conditions/Requirements

Prerequisites

- n8n instance (self-hosted ≥ v1.0 or n8n.cloud account)
- Basic understanding of your log source's payload structure
- SMTP server or n8n Email credentials configured
- Jira Cloud or Jira Server account with API access

Required Credentials

- **Email (SMTP/IMAP or n8n Email node credential)** — to dispatch alert emails
- **Jira** — to create issues automatically in the chosen project
- **HTTP Request Auth (optional)** — if your log endpoint requires authentication

Specific Setup Requirements

| Setting | Recommended Value | Notes |
|---|---|---|
| Batch window (Wait node) | 10 minutes | Time allowed to collect & deduplicate errors |
| Deduplication key (Code) | error_id or message field | Choose a unique attribute representing the same incident |
| Email recipients | Security & DevOps distribution list | Use semicolons for multiple addresses |
| Jira project key | SEC | Project where alert tickets should be filed |

How it works

Key Steps:

1. **Schedule Trigger** – Runs every X minutes to poll/collect new log items.
2. **HTTP Request** – Pulls error events from your monitoring or log system.
3. **IF Node** – Quickly filters out non-error or resolved events.
4. **Code Node (Deduplicator)** – Hashes & stores unique error signatures, skipping already-seen items.
5. **Wait Node** – Holds processing for the batching period (e.g., 10 min).
6. **Merge Node** – Combines all unique errors gathered during the window.
7. **Set Node** – Formats the consolidated message for Email & Jira.
8. **Email Send** – Dispatches the summary email.
9. **Jira Node** – Creates (or updates) an issue with the same summary.
10. **Sticky Notes** – Provide inline documentation right inside the workflow for easier maintenance.

Set up steps

Setup Time: 15-20 minutes

1. Import template: Download the JSON template and drag & drop it into your n8n editor.
2. Configure Schedule Trigger: Set the polling interval (e.g., every 5 minutes).
3. HTTP Request Node: Enter the URL of your log endpoint and add authentication if required.
4. Adjust IF filter: Modify the condition to match your log's error severity field (status === "error").
5. Customize Code Node: Replace error_id with the field that uniquely identifies an error. Optionally tweak the deduplication TTL.
6. Wait Node: Set the batch time (e.g., 600 seconds).
7. Set Node: Edit the email subject/body and the Jira issue summary/description placeholders.
8. Credentials: Add or select your Email credential in Email Send, and your Jira credential in the Jira node.
9. Test-run the workflow to verify that duplicate events are collapsed and that the email and Jira tickets show combined information.
10. Activate the workflow to start production monitoring.

Node Descriptions

Core Workflow Nodes:

- **Schedule Trigger** – Initiates the workflow on a fixed interval.
- **HTTP Request** – Retrieves fresh error logs from an external API.
- **IF** – Only lets true error events proceed.
- **Code (Deduplicator)** – Uses JavaScript to remove already-known errors via n8n static data.
- **Wait** – Creates a batching window for aggregation.
- **Merge (Queue mode)** – Joins events accumulated during the wait.
- **Set** – Crafts a human-readable report for Email & Jira.
- **Email Send** – Dispatches the consolidated message to stakeholders.
- **Jira** – Opens/updates an issue containing the same error digest.
- **Sticky Note** – Provides inline explanations for future maintainers.

Data Flow:

Schedule Trigger → HTTP Request → IF → Code
Code → Wait → Merge → Set
Set → Email Send & Jira

Customization Examples

Change Deduplication Strategy

```javascript
// Code Node snippet
// Use error 'stacktrace' + 'service' for uniqueness
const staticData = $getWorkflowStaticData('global');
const signature = `${item.json.stacktrace}_${item.json.service}`;
if ((staticData.signatureCache || []).includes(signature)) {
  // duplicate, skip
  return [];
}
staticData.signatureCache = [...(staticData.signatureCache || []), signature];
return item;
```

Update Existing Jira Issue Instead of Creating New

In the Jira node, search for an open ticket with the same summary; if found, add a comment instead of creating a new issue:

```json
{
  "operation": "comment",
  "issueKey": "={{$node['Set'].json['jiraIssueKey']}}",
  "comment": "New occurrences: {{$json.errorCount}}"
}
```

Data Output Format

The workflow outputs structured JSON data:

```json
{
  "errors": [
    {
      "id": "ERR123",
      "message": "Database timeout",
      "count": 5,
      "firstSeen": "2024-03-14T10:12:00Z",
      "lastSeen": "2024-03-14T10:22:00Z"
    }
  ],
  "emailStatus": "success",
  "jiraStatus": "issue_created"
}
```

Troubleshooting

Common Issues

- No data returned from HTTP Request – Verify the endpoint URL and authentication headers, and confirm that your monitoring tool actually has recent error events.
- Duplicate alerts still coming through – Increase the Wait node's batching window or refine the deduplication key in the Code node.

Performance Tips

- Cache HTTP responses if the log API supports it to reduce bandwidth.
- Use selective fields in the HTTP Request's query parameters to limit payload size.

Pro Tips:

- Store a rolling hash list in external Redis or a database for large-scale deduplication.
- Add a second IF branch to auto-resolve Jira tickets when an error disappears for X hours.
- Use Slack or Microsoft Teams nodes in parallel to broaden alert coverage.
This is a community-contributed n8n workflow template provided “as-is.” Thoroughly test in a non-production environment before deploying to production.
by Jitesh Dugar
Automate your social media marketing by instantly promoting new Shopify products. This workflow polls your store for new arrivals, generates AI-powered captions, and publishes them across Instagram, Facebook, and Twitter/X, while maintaining a deduplication log in Airtable.

🎯 What This Workflow Does

This workflow acts as a 24/7 social media manager, ensuring every new product gets immediate visibility without manual effort.

⏱️ Step 1 — Poll, Fetch & Deduplicate

- **Schedule Trigger:** Polls Shopify every 10 minutes for products published in the last 15 minutes
- **Deduplication Logic:** Checks Airtable (ProductPostLog) to avoid reposting the same product

🖼️ Step 2 — Data Enrichment & Media Hosting

- **Normalization:** Cleans and maps product fields (title, price, vendor)
- **CDN Hosting:** Downloads the product image and uploads it via UploadToURL to generate a public HTTPS URL

🤖 Step 3 — Platform-Specific AI Captions

- **Instagram:** Storytelling style, emojis, 10+ hashtags
- **Facebook:** Conversational, CTA-driven, link-preview optimized
- **Twitter/X:** Short-form (under 260 chars), includes price and link

📤 Step 4 — Multi-Platform Publishing & Logging

- **Instagram:** Create container → publish via Graph API
- **Facebook:** Post image + caption to Page
- **Twitter/X:** Publish tweet via native node
- **Airtable Log:** Store status, timestamp, and image URL for tracking

✨ Key Features

- **Intelligent Deduplication:** Prevents duplicate posts using the Airtable log
- **Sequential Processing:** Handles products one by one to avoid rate limits
- **Custom AI Tones:** Tailored captions for each platform
- **Reliable Media Hosting:** UploadToURL ensures valid public image URLs

🔧 Setup Requirements

Required Integrations

- **Shopify:** Admin API with read_products scope
- **Airtable:** Base with ProductPostLog table
- **OpenAI:** API credentials for caption generation
- **Facebook & Instagram:** Page token with posting permissions
- **Twitter/X:** OAuth1 credentials
- **UploadToURL:** CDN hosting for images

Environment Variables

- SHOPIFY_STORE_DOMAIN
- SHOPIFY_ACCESS_TOKEN
- IG_ACCOUNT_ID
- FB_ACCESS_TOKEN
- FB_PAGE_ID

Ready to launch? Import this template and connect your Shopify store to turn every new product into a viral social media event!
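The 15-minute look-back filter from Step 1 could be implemented in a Code node roughly like this. The `published_at` field mirrors Shopify's product payload, but treat the helper as an illustrative assumption rather than the template's exact implementation:

```javascript
// Illustrative recency filter: keep only products published within the
// look-back window (15 minutes in this template's Step 1).
function publishedWithin(products, minutes, now) {
  const cutoff = now.getTime() - minutes * 60 * 1000;
  return products.filter(
    (p) => p.published_at && new Date(p.published_at).getTime() >= cutoff
  );
}
```

Running the filter against each poll, and then checking the Airtable log, keeps a product from being posted twice even if it appears in overlapping polling windows.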
by Khairul Muhtadin
Decodo Amazon Product Recommender delivers instant, AI-powered shopping recommendations directly through Telegram. Send any product name and receive an Amazon product analysis featuring price comparisons, ratings, sales data, and categorized recommendations (budget, premium, best value) in under 40 seconds, eliminating hours of manual research.

Why Use This Workflow?

- Time Savings: Reduce product research from 45+ minutes to under 30 seconds
- Decision Quality: Compare 20+ products automatically with AI-curated recommendations
- Zero Manual Work: Complete automation from message input to formatted recommendations

Ideal For

- **E-commerce Entrepreneurs:** Quickly research competitor products, pricing strategies, and market trends for inventory decisions
- **Smart Shoppers & Deal Hunters:** Get instant product comparisons with sales-volume data and discount tracking before purchasing
- **Product Managers & Researchers:** Analyze Amazon marketplace positioning, customer sentiment, and pricing ranges for competitive intelligence

How It Works

1. Trigger: User sends a product name via Telegram (e.g., "iPhone 15 Pro Max case")
2. AI Validation: Gemini 2.5 Flash extracts core product keywords and validates input authenticity
3. Data Collection: Decodo API scrapes Amazon search results, extracting prices, ratings, reviews, sales volume, and product URLs
4. Processing: A JavaScript node cleans the data, removes duplicates, calculates value scores, and categorizes products (top picks, budget, premium, best value, most popular)
5. Intelligence Layer: AI generates personalized recommendations with Telegram-optimized markdown formatting, shortened product names, and clean Amazon URLs
6. Output & Delivery: Formatted recommendations are sent to the user with categorized options and direct purchase links
7. Error Handling: Admin notifications via a separate Telegram channel for workflow monitoring

Setup Guide

Prerequisites

| Requirement | Type | Purpose |
|---|---|---|
| n8n instance | Essential | Workflow execution platform |
| Decodo Account | Essential | Amazon product data scraping |
| Telegram Bot Token | Essential | Chat interface for user interactions |
| Google Gemini API | Essential | AI-powered product validation and recommendations |
| Telegram Account | Optional | Admin error notifications |

Installation Steps

1. Import the JSON file into your n8n instance.
2. Configure credentials:
   - Decodo API: Sign up at decodo.com → Dashboard → Scraping APIs → Web Advanced → copy the BASIC AUTH TOKEN
   - Telegram Bot: Message @BotFather on Telegram → /newbot → copy the HTTP API token (format: 123456789:ABCdefGHI...)
   - Google Gemini: Obtain an API key from Google AI Studio for the Gemini 2.5 Flash model
3. Update environment-specific values:
   - Replace YOUR-CHAT-ID in the "Notify Admin" node with your Telegram chat ID for error notifications
   - Verify Telegram webhook IDs are properly configured
4. Customize settings:
   - Adjust the AI prompt in the "Generate Recommendations" node for different output formats
   - Set character limits (default: 2500) for Telegram message length
5. Test execution:
   - Send a test message to your Telegram bot: "iPhone 15 Pro"
   - Verify processing status messages appear
   - Confirm recommendations arrive with properly formatted links

Customization Options

Basic Adjustments:

- **Character Limit:** Modify 2500 in the AI prompt to adjust response length (Telegram max: 4096)

Advanced Enhancements:

- **Multi-language Support:** Add language detection and translation nodes for international users
- **Price Tracking:** Integrate Google Sheets to log historical prices and trigger alerts on drops
- **Image Support:** Enable Telegram photo messages with product images from scraping results

Troubleshooting

Common Issues:

| Problem | Cause | Solution |
|---|---|---|
| "No product detected" for valid inputs | AI validation too strict or ambiguous query | Add specific product details (model number, brand) to the user input |
| Empty recommendations returned | Decodo API rate limit or Amazon blocking | Wait 60 seconds between requests; verify Decodo account status |
| Telegram message formatting broken | Special characters in product names | Ensure Telegram markdown mode is set to "Markdown" (legacy), not "MarkdownV2" |

Use Case Examples

Scenario 1: E-commerce Store Owner

- Challenge: Needs to quickly assess competitor pricing and product positioning for new inventory decisions without spending hours browsing Amazon
- Solution: Sends "wireless earbuds" to the bot and receives a categorized analysis of 20+ products with price ranges ($15-$250), top sellers, and discount opportunities
- Result: Identifies a $35-$50 price gap in the market, sources a comparable product, and achieves a 40% profit margin

Scenario 2: Smart Shopping Enthusiast

- Challenge: Wants to buy a laptop backpack but is overwhelmed by 200+ Amazon options with varying prices and unclear value propositions
- Solution: Messages "laptop backpack" to the bot and gets AI recommendations sorted by budget ($30), premium ($50+), best value (highest discount + good ratings), and most popular (by sales volume)
- Result: Purchases the "Best Value" recommendation with a 35% discount, saving $18 and 45 minutes of research time

Created by: Khaisa Studio
Category: AI | Productivity | E-commerce
Tags: amazon, telegram, ai, product-research, shopping, automation, gemini

Need custom workflows? Contact us

Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
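The "value score" mentioned in the processing step could be computed along these lines. The weights and field names below are illustrative assumptions (the template does not publish its exact formula); the idea is simply to reward high ratings and deep discounts while lightly penalizing price:

```javascript
// Illustrative value-score sketch for the product-categorization step.
// Weights are assumptions - tune them to your own notion of "best value".
function valueScore(product) {
  const { rating = 0, discountPercent = 0, price = 0 } = product;
  return rating * 20 + discountPercent - price * 0.1;
}

// Pick the highest-scoring product as the "Best Value" recommendation
function bestValue(products) {
  return [...products].sort((a, b) => valueScore(b) - valueScore(a))[0];
}
```

Similar one-line scorers can drive the other categories (most popular by sales volume, budget by lowest price, and so on).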
by Yaron Been
Description

This workflow automatically scans companies for signs of financial distress across filings, insolvency registers, and financial news. It helps procurement, credit, and risk teams detect early warning signals before a supplier or partner defaults.

Overview

This workflow uses Bright Data to scrape financial filings, insolvency registers, and news sources for distress signals like bankruptcy, restructuring, or payment defaults. AI classifies the type and severity of distress, applies probability weighting and confidence guardrails, then generates structured business decisions, including:

- Supplier Monitoring risk status
- Onboarding Approval recommendations
- Portfolio Exposure classifications

All outputs are logged into Google Sheets for tracking and auditability.

Tools Used

- **n8n:** Automation platform orchestrating the workflow
- **Bright Data:** Scrapes filings, insolvency registers, and financial news without getting blocked
- **OpenRouter:** AI-powered distress classification, risk scoring, and business decision generation
- **Google Sheets:** Logs supplier risk status, onboarding decisions, portfolio exposure, and errors

How to Install

1. Import the Workflow: Download the .json file and import it into your n8n instance.
2. Configure Bright Data: Add your Bright Data API credentials to all Bright Data nodes.
3. Configure OpenRouter: Add your OpenRouter API key for AI distress classification and decision generation.
4. Set Up Google Sheets: Create a spreadsheet following the "Google Sheets Setup" sticky note inside the workflow, then connect each Google Sheets node to your document.
5. Customize: Edit the configuration node to define the target company, country, risk indicators, and monitoring scope.

Use Cases

- Procurement Teams: Monitor supplier financial health and get alerts before disruptions hit your supply chain.
- Credit Risk Analysts: Screen new vendors or partners for bankruptcy signals and insolvency red flags.
- Onboarding Workflows: Automate go/no-go decisions for new supplier or partner approvals.
- Portfolio Managers: Track financial exposure across your vendor or investment portfolio.
- Finance Teams: Detect early signs of distress in key business relationships before they become critical.

Connect with Me

- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)

Tags

#n8n #automation #brightdata #webscraping #creditrisk #financialdistress #riskmanagement #suppliermonitoring #supplychainrisk #insolvency #bankruptcy #duediligence #vendorscreening #portfoliorisk #financialanalysis #n8nworkflow #workflow #nocode #businessintelligence #riskassessment #creditanalysis #procurementautomation #supplierrisk #financialmonitoring #earlywarning
by vinci-king-01
How it works

This workflow automatically analyzes website visitors in real time, enriches their data with company intelligence, and provides lead scoring and sales alerts.

Key Steps

1. Webhook Trigger - Receives visitor data from your website tracking system.
2. AI-Powered Company Intelligence - Uses ScrapeGraphAI to extract comprehensive company information from visitor domains.
3. Visitor Enrichment - Combines visitor behavior data with company intelligence to create detailed visitor profiles.
4. Lead Scoring - Automatically scores leads based on company size, industry, engagement, and intent signals.
5. CRM Integration - Updates your CRM with enriched visitor data and lead scores.
6. Sales Alerts - Sends real-time notifications to your sales team for high-priority leads.

Set up steps

Setup time: 10-15 minutes

1. Configure ScrapeGraphAI credentials - Add your ScrapeGraphAI API key for company intelligence gathering.
2. Set up HubSpot connection - Connect your HubSpot CRM to automatically update contact records.
3. Configure Slack integration - Set up your Slack workspace and specify the sales alert channel.
4. Customize lead scoring criteria - Adjust the scoring algorithm to match your target customer profile.
5. Set up website tracking - Configure your website to send visitor data to the webhook endpoint.
6. Test the workflow - Verify all integrations are working correctly with a test visitor.

Key Features

- **Real-time visitor analysis** with company intelligence enrichment
- **Automated lead scoring** based on multiple factors (company size, industry, engagement)
- **Intent signal detection** (pricing interest, demo requests, contact intent)
- **Priority-based sales alerts** with recommended actions
- **CRM integration** for seamless lead management
- **Deal size estimation** based on company characteristics
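A lead-scoring step in the spirit described above might look like this sketch. The weights, thresholds, and field names are illustrative assumptions to be tuned to your ideal customer profile, not the template's actual algorithm:

```javascript
// Illustrative lead scorer: combine firmographics, engagement, and intent
// signals into a single number. All weights are assumptions - tune them.
function scoreLead(visitor) {
  let score = 0;
  // Firmographics: bigger companies usually mean bigger deals
  if (visitor.companySize >= 200) score += 30;
  else if (visitor.companySize >= 50) score += 15;
  // Industry fit against the target customer profile
  if (['saas', 'fintech'].includes(visitor.industry)) score += 20;
  // Engagement: reward page views, capped so it can't dominate
  score += Math.min(visitor.pageViews * 2, 20);
  // Intent signal: a pricing-page visit is a strong buying cue
  if (visitor.visitedPricing) score += 30;
  return score;
}
```

Routing leads above a chosen threshold (say, 70) to the Slack alert branch is then a simple IF node condition.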
by Oneclick AI Squad
This workflow orchestrates three specialized AI agents in sequence to research trends, generate platform-specific content (captions + images), and schedule posts across LinkedIn, X (Twitter), and Instagram.

Who's it for

- Social media managers handling multiple brands
- Marketing teams needing consistent content pipelines
- Creators automating their content calendar
- Agencies managing high-volume posting schedules

How it works

1. A topic/industry keyword is submitted via webhook or schedule
2. Agent 1 (Research): Searches current trends, extracts insights & hashtags
3. Agent 2 (Content Gen): Produces platform-specific captions + image prompts
4. Agent 3 (Scheduling): Posts to LinkedIn, X, and Instagram via their APIs
5. All activity is logged to a Google Sheet tracker

How to set up

1. Import this workflow into n8n
2. Set credentials: OpenAI, LinkedIn OAuth2, Twitter OAuth2, Instagram Graph API, Google Sheets OAuth2, SendGrid (optional alerts)
3. Replace the YOUR_SHEET_ID placeholder in the Google Sheet node
4. Replace the YOUR_INSTAGRAM_ACCOUNT_ID and YOUR_LINKEDIN_PERSON_URN placeholders
5. Activate the workflow

Requirements

- OpenAI API key (GPT-4.1 mini or better)
- LinkedIn OAuth2 credentials
- Twitter/X OAuth2 credentials
- Instagram Graph API credentials (Business account)
- Google Sheets OAuth2
- Optional: Stable Diffusion / DALL-E for image generation

How to customize

- Swap the OpenAI model in all AI nodes
- Adjust character limits per platform in the Agent 2 prompt
- Modify the posting schedule in the Schedule Trigger node
- Add Slack/email notifications on post confirmation
- Extend the Google Sheet columns for additional metadata
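Besides adjusting the limits in the Agent 2 prompt, a defensive truncation step before posting can guarantee captions fit each platform. A sketch, with limits that are common platform defaults rather than values taken from this template:

```javascript
// Illustrative per-platform caption guard. The limits are common platform
// defaults (assumptions) - adjust to match your Agent 2 prompt settings.
const LIMITS = { x: 280, linkedin: 3000, instagram: 2200 };

function fitCaption(caption, platform) {
  const limit = LIMITS[platform] ?? 280; // fall back to the strictest limit
  if (caption.length <= limit) return caption;
  // Reserve one character for the ellipsis marker
  return caption.slice(0, limit - 1).trimEnd() + '…';
}
```

Even if the model overshoots its instructed length, the posting step then never fails a platform's length validation.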
by Davide
This workflow automates the creation of short video clips from a YouTube video based on specific content requested by the user. This is a complete AI-powered video clipping and distribution system that turns any YouTube video into ready-to-publish short-form content automatically.

IMPORTANT: This workflow is quite complex. It requires integrating multiple APIs, including YouTube Transcript, RapidAPI for video download, OpenAI GPT, Fal.run for video processing, Google Drive, FTP, and various social media platforms (TikTok, YouTube, Instagram via Postiz and Upload-Post). Despite the complexity, the core idea is simple and elegant: you provide a YouTube video and a prompt describing what you're looking for, and the workflow automatically finds the exact moment in the video, extracts that segment, and publishes it as a short clip across all your social channels.

Key Benefits
1. ✅ **Fully Automated Content Repurposing** - No manual editing is required. The workflow automatically extracts and creates clips from long-form content.
2. ✅ **AI-Powered Precision** - The use of an LLM ensures that the exact relevant moment in the video is identified based on meaning, not just keywords.
3. ✅ **Multi-Platform Distribution** - The workflow doesn't just create clips; it also publishes them across multiple platforms (TikTok, YouTube, Instagram), saving time and effort.
4. ✅ **Scalability** - You can process multiple videos and prompts, making it ideal for content creators, agencies, and social media automation.
5. ✅ **Time Efficiency** - What normally takes 30-60 minutes of manual editing is reduced to a fully automated flow that runs in minutes.
6. ✅ **Modular & Extendable** - Each step (transcript, AI analysis, trimming, publishing) is modular, so you can replace APIs, add subtitles, captions, or branding, and integrate with other tools.
7. ✅ **Centralized Media Management** - Files are automatically stored in Google Drive and a CDN (FTP), ensuring easy access and distribution.
8. ✅ **Error Handling & Validation** - The workflow checks whether the requested content is found in the transcript and avoids unnecessary processing if it is not.

How it works
This workflow automates the process of creating a short video clip from a YouTube video based on a specific user prompt. It follows these steps:
1. **Input & Transcript Retrieval**: The user provides a YouTube Video ID and a search prompt. The workflow first calls the youtube-transcript.io API to fetch the video's transcript.
2. **AI-Powered Time Extraction**: The transcript and the user's prompt are sent to an OpenAI GPT model (gpt-4.1-mini) with a structured prompt. The LLM analyzes the transcript to find the exact segment matching the prompt and outputs a JSON object containing the start_time, end_time, and duration of the matching clip.
3. **Video Download & Hosting**: Simultaneously, the workflow downloads the full video file using a RapidAPI YouTube downloader, then uploads it to an FTP server (BunnyCDN) to generate a public, accessible URL.
4. **Clip Generation (Fal.run)**: Using the video_url (from the FTP server) and the timecodes from the AI, the workflow sends a request to a Fal.run workflow utility (trim-video) to create the short video clip, then polls the status endpoint until processing is completed.
5. **Clip Distribution**: Once the clip is ready, the workflow downloads the final video file and uploads it to multiple destinations:
   - Storage: Google Drive and an FTP server (BunnyCDN).
   - Social Media: Directly to TikTok, YouTube, and Instagram using the upload-post.com and Postiz APIs.

Set up steps
To get this workflow running, you need to configure several external API keys and credentials.

API Keys & Credentials:
- **RapidAPI**: Obtain an API key for the "Youtube Video Fast Downloader 24-7" API and insert it in the "Download Video" node's header parameters.
- **Youtube Transcript API**: Get an API key from youtube-transcript.io and configure it in the "Generate transcript" node's HTTP Header Auth.
- **OpenAI**: Add your OpenAI API key in the "OpenAI Chat Model" node credentials.
- **Fal.run**: Add your Fal.run API key in the "Video Dubbing" and "Get final video url" nodes under HTTP Header Auth.

Storage & Hosting:
- **Google Drive**: Set up OAuth2 credentials for Google Drive. The node is configured to upload to a specific folder ID (ensure this folder exists or update the ID).
- **FTP BunnyCDN**: Configure FTP credentials (host, username, password) in the "Upload to FTP" and "Upload to FTP server" nodes. The paths are set to /n3wstorage/test/; update these paths and credentials to match your server.

Social Media Integration (Optional):
- **Postiz (Instagram)**: Set up credentials for Postiz in the "Upload to Instagram" node and update the integrationId and content fields with your specific Instagram account details.
- **Upload-Post.com (TikTok & YouTube)**: Obtain an API key for upload-post.com and configure it in the "Upload to TikTok" and "Upload to Youtube" nodes. You must also update the title, user, and platform[] parameters (currently placeholders like SET_TITLE and YOUR_USERNAME) with your actual account data.

Workflow Variables:
- Ensure the "Edit Fields" node contains the correct VIDEO ID and PROMPT for testing.

👉 Subscribe to my new YouTube channel, where I share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
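Before handing the AI's timecodes to the trim-video step, it is worth validating the structured output, since LLM JSON can occasionally be malformed. The field names (`start_time`, `end_time`, `duration`) come from the workflow description above; the validation logic itself is an assumed sketch, not part of the template:

```javascript
// Sketch: validate the LLM's structured output before calling the
// Fal.run trim-video utility. Field names come from the workflow
// description; the checks themselves are illustrative assumptions.
function parseClipTimes(llmOutput) {
  const { start_time, end_time, duration } = JSON.parse(llmOutput);

  // All three fields must be numbers (seconds, assumed)
  if (![start_time, end_time, duration].every(Number.isFinite)) {
    throw new Error('LLM output is missing numeric timecodes');
  }
  // The clip must span a positive interval
  if (end_time <= start_time) {
    throw new Error('end_time must be after start_time');
  }
  // duration should agree with the timecodes (1s tolerance assumed)
  if (Math.abs((end_time - start_time) - duration) > 1) {
    throw new Error('duration does not match the timecodes');
  }
  return { start_time, end_time, duration };
}
```

Failing fast here supports the workflow's error-handling goal: if the requested content is not found, no download, trimming, or publishing work is wasted.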