by Constantine Kissel
# Generate research-backed article with n8n

## Who's it for
Content marketers, SEO teams, and founders who need fast, research-grounded blog posts or long-form articles, with multi-language support included. Works well for teams that want citations, outlines, and section-by-section drafting with minimal manual effort.

## How it works / What it does
A Form collects the domain, keywords, and target language. The workflow refines the keywords, finds recent articles and authoritative citations, then synthesizes a master outline and loops through each section: generate search queries → fetch web results → summarize findings → write the section with the advanced model. Finally, it aggregates all sections into a clean Markdown article (a sketch of this step follows below). Optional delivery nodes (Email, Telegram) and an AI Agent are included but disabled by default.

## How to set up
1. Import the workflow JSON into n8n.
2. Add your OpenAI credential.
3. Set `simple_model` / `advanced_model` in **LLM Params**.

## Requirements
- n8n instance with outbound internet access
- OpenAI API access (Responses API + `web_search_preview`)
- (Optional) Email/Telegram credentials if you want results delivered asynchronously

## How to customize the workflow
- Edit prompts in **LLM Params** and **Section Prompts** to match tone, structure, and SEO style.
- Tweak recency and source rules in **Search Articles** / **Search Citations**.
- Insert a human review step before **Write Section**, or enable the delivery nodes.
- Change working/output languages in the **Language** node.
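A minimal sketch of what the final aggregation step can look like in an n8n Code node. The field names (`sectionTitle`, `sectionText`) are assumptions for illustration; match them to whatever your section-writer node actually outputs.

```javascript
// n8n Code node (Run Once for All Items): stitch drafted sections into one Markdown article.
const sections = $input.all().map(item => item.json);

const article = sections
  .map(s => `## ${s.sectionTitle}\n\n${s.sectionText}`) // assumed field names
  .join('\n\n');

return [{ json: { article } }];
```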
by Mark Shcherbakov
## Video Guide
I prepared a detailed guide that shows the whole process of building an AI tool to analyze Instagram Reels using n8n. Youtube Link

## Who is this for?
This workflow is ideal for social media analysts, digital marketers, and content creators who want to leverage data-driven insights from their Instagram Reels. It's particularly useful for those looking to automate the analysis of video performance to inform strategy and content creation.

## What problem does this workflow solve?
Analyzing video performance on Instagram can be tedious and time-consuming, requiring multiple steps and manual data extraction. This workflow automates the process of fetching, analyzing, and recording insights from Instagram Reels, making it simpler for users to track engagement metrics without manual intervention.

## What this workflow does
This workflow integrates several services to analyze Instagram Reels, allowing users to:
- Automatically fetch recent Reels from specified creators.
- Analyze the most-watched videos for insights (a selection sketch follows below).
- Store and manage data in Airtable for easy access and reporting.

1. **Initial Trigger**: The process begins with a manual trigger that can later be modified for scheduled automation.
2. **Data Retrieval**: It connects to Airtable to fetch a list of creators and their respective Instagram Reels.
3. **Video Analysis**: It handles fetching, downloading, and uploading videos for analysis using an external service, simplifying performance tracking through a structured query process.
4. **Record Management**: It saves relevant metrics and insights into Airtable, ensuring that users can access and organize their video analytics effectively.

## Setup
1. **Create accounts**: Set up Airtable, Edify, n8n, and Gemini accounts.
2. **Prepare triggers and modules**: Replace the credentials in each node accordingly.
3. **Configure data flow**: Ensure modules are set to fetch and analyze the correct data fields as outlined in the guide.
4. **Test the workflow**: Run the scenario manually to confirm that data is fetched and analyzed correctly.
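A minimal sketch of how a Code node could pick the most-watched Reels before sending them off for analysis. The `play_count` field name is an assumption; use whatever metric field your data source returns.

```javascript
// n8n Code node: keep the top 5 Reels by view count for deeper analysis.
const reels = $input.all().map(item => item.json);

const topReels = reels
  .filter(r => typeof r.play_count === 'number') // assumed metric field
  .sort((a, b) => b.play_count - a.play_count)
  .slice(0, 5);

return topReels.map(r => ({ json: r }));
```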
by Max
# N8N Automated Twitter Reply Bot Workflow

For the latest version, check: dziura.online/automation. The latest documentation can be found here.

You must have the Apify community node installed before pasting the JSON into your workflow.

## Overview
This n8n workflow creates an intelligent Twitter/X reply bot that automatically scrapes tweets based on keywords or communities, analyzes them using AI, generates contextually appropriate replies, and posts them while avoiding duplicates. The bot operates on a schedule with intelligent timing and retry mechanisms.

## Key Features
- **Automated tweet scraping** from Twitter/X using Apify actors
- **AI-powered reply generation** using an LLM (Large Language Model)
- **Duplicate prevention** via MongoDB storage
- **Smart scheduling** with timezone awareness and natural posting patterns
- **Retry mechanism** with failure tracking
- **Telegram notifications** for status updates
- **Manual trigger** option via Telegram command

## Required Credentials & Setup

### 1. Telegram Bot
- Create a bot via @BotFather on Telegram
- Get your Telegram chat ID to receive status messages
- **Credential needed**: Telegram account (Bot token)

### 2. MongoDB Database
- Set up a MongoDB database to store replied tweets and prevent duplicates
- Create a collection (default name: `collection_name`)
- **Credential needed**: MongoDB account (Connection string)
- **Tutorial**: MongoDB Connection Guide

### 3. Apify Account
- Sign up at Apify.com
- **Primary actors used**:
  - Search Actor: `api-ninja/x-twitter-advanced-search` for keyword-based tweet scraping (ID: 0oVSlMlAX47R2EyoP)
  - Community Actor: `api-ninja/x-twitter-community-search-post-scraper` for community-based tweet scraping (ID: upbwCMnBATzmzcaNu)
- **Credential needed**: Apify account (API token)

### 4. OpenRouter (LLM Provider)
- Sign up at OpenRouter.ai
- Used for AI-powered tweet analysis and reply generation
- **Model used**: x-ai/grok-3 (configurable)
- **Credential needed**: OpenRouter account (API key)

### 5. Twitter/X API
- Set up a developer account at developer.x.com
- **Note**: The free tier is limited to ~17 posts per day
- **Credential needed**: X account (OAuth2 credentials)

## Workflow Components

### Trigger Nodes

#### 1. Schedule Trigger
- **Purpose**: Runs automatically every 20 minutes
- **Smart timing**: Only active between 7 AM and 11:59 PM (configurable timezone)
- **Randomization**: Built-in probability control (~28% execution chance) to mimic natural posting patterns (a sketch of this gate appears after the Configuration Guide)

#### 2. Manual Trigger
- **Purpose**: Manual execution for testing

#### 3. Telegram Trigger
- **Purpose**: Manual execution via the /reply command in Telegram
- **Usage**: Send /reply to your bot to trigger the workflow manually

### Data Processing Flow

#### 1. MongoDB Query (Find documents)
- **Purpose**: Retrieves previously replied tweet IDs to avoid duplicates
- **Collection**: `collection_name` (configure to match your setup)
- **Projection**: Only fetches the `tweet_id` field for efficiency

#### 2. Data Aggregation (Aggregate1)
- **Purpose**: Consolidates tweet IDs into a single array for filtering

#### 3. Keyword/Community Selection (Keyword/Community List)
- **Purpose**: Defines search terms and communities
- **Configuration**: Edit the JSON to include your keywords and Twitter community IDs
- **Format** (the 19-digit number is a community ID):
  ```json
  {
    "keyword_community_list": [
      "SaaS",
      "Entrepreneur",
      "1488663855127535616"
    ],
    "failure": 0
  }
  ```

#### 4. Random Selection (Randomized community, keyword)
- **Purpose**: Randomly selects one item from the list to ensure variety

#### 5. Routing Logic (If4)
- **Purpose**: Determines whether to use Community search or Keyword search
- **Logic**: Uses a regex to detect 19-digit community IDs vs. keywords (see the sketch at the end of this description)

### Tweet Scraping (Apify Actors)

#### Community Search Actor
- **Actor**: api-ninja/x-twitter-community-search-post-scraper
- **Purpose**: Scrapes tweets from specific Twitter communities
- **Configuration**:
  ```json
  {
    "communityIds": ["COMMUNITY_ID"],
    "numberOfTweets": 40
  }
  ```

#### Search Actor
- **Actor**: api-ninja/x-twitter-advanced-search
- **Purpose**: Scrapes tweets based on keywords
- **Configuration**:
  ```json
  {
    "contentLanguage": "en",
    "engagementMinLikes": 10,
    "engagementMinReplies": 5,
    "numberOfTweets": 20,
    "query": "KEYWORD",
    "timeWithinTime": "2d",
    "tweetTypes": ["original"],
    "usersBlueVerifiedOnly": true
  }
  ```

### Filtering System (Community filter)
The workflow applies multiple filters to ensure high-quality replies:
- **Text length**: >60 characters (substantial content)
- **Follower count**: >100 followers (audience reach)
- **Engagement**: >10 likes, >3 replies (proven engagement)
- **Language**: English only
- **Views**: >100 views (visibility)
- **Duplicate check**: Not previously replied to
- **Recency**: Within 2 days (configurable in actor settings)

### AI-Powered Reply Generation

#### LLM Chain (Basic LLM Chain)
- **Purpose**: Analyzes filtered tweets and generates contextually appropriate replies
- **Model**: Grok-3 via OpenRouter (configurable)
- **Features**:
  - Engagement potential scoring
  - User authority analysis
  - Timing optimization
  - Multiple reply styles (witty, informative, supportive, etc.)
  - <100 character limit for optimal engagement

#### Output Parser (Structured Output Parser)
- **Purpose**: Ensures a consistent JSON output format
- **Schema**:
  ```json
  {
    "selected_tweet_id": "tweet_id_here",
    "screen_name": "author_screen_name",
    "reply": "generated_reply_here"
  }
  ```

### Posting & Notification System

#### Twitter Posting (Create Tweet)
- **Purpose**: Posts the generated reply as a Twitter response
- **Error handling**: Catches API limitations and rate limits

#### Status Notifications
- **Success**: Notifies via Telegram with the tweet link and reply text
- **Failure**: Notifies about API limitations or errors
- **Format**: HTML-formatted messages with clickable links

#### Database Storage (Insert documents)
- **Purpose**: Saves successful replies to prevent future duplicates
- **Fields stored**: `tweet_id`, `screen_name`, `reply`, `tweet_url`, `timestamp`

### Retry Mechanism
The workflow includes intelligent retry logic:

#### Failure Counter (If5, Increment Failure Counter1)
- **Logic**: If no suitable tweets are found, increment the failure counter
- **Retry limit**: Maximum 3 retries with different random keywords
- **Wait time**: 3-second delay between retries

#### Final Failure Notification
- **Trigger**: After 4 failed attempts
- **Action**: Sends a Telegram notification about the unsuccessful search
- **Recovery**: Manual retry available via the /reply command

## Configuration Guide

### Essential Settings to Modify
1. **MongoDB Collection Name**: Update `collection_name` in the MongoDB nodes
2. **Telegram Chat ID**: Replace 11111111111 with your actual chat ID
3. **Keywords/Communities**: Edit the list in the Keyword/Community List node
4. **Timezone**: Update the timezone in the Code node (currently set to Europe/Kyiv)
5. **Actor Selection**: Enable only one actor (Community OR Search) based on your needs

### Filter Customization
Adjust the filters in the Community filter node based on your requirements:
- Minimum engagement thresholds
- Text length requirements
- Time windows
- Language preferences

### LLM Customization
Modify the AI prompt in the Basic LLM Chain to:
- Change reply style and tone
- Adjust engagement criteria
- Modify scoring algorithms
- Set different character limits
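A minimal sketch of the kind of smart-timing gate the Code node applies after the Schedule Trigger, assuming the ~28% probability and 7 AM to 11:59 PM window described above; the exact thresholds in the shipped workflow may differ.

```javascript
// n8n Code node: gate execution to active hours and a ~28% random chance.
const tz = 'Europe/Kyiv'; // match the timezone configured in the workflow
const hour = Number(
  new Intl.DateTimeFormat('en-GB', { hour: '2-digit', hourCycle: 'h23', timeZone: tz })
    .format(new Date())
);

const withinActiveHours = hour >= 7 && hour <= 23; // 7 AM - 11:59 PM
const passesDice = Math.random() < 0.28;           // ~28% execution chance

return [{ json: { proceed: withinActiveHours && passesDice } }];
```

A downstream If node can then check `proceed` and stop the run when it is false.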
## Usage Tips
1. **Start small**: Begin with a few high-quality keywords/communities
2. **Monitor performance**: Use the Telegram notifications to track success rates
3. **Adjust filters**: Fine-tune based on the quality of generated replies
4. **Respect limits**: Twitter's free tier allows ~17 posts/day
5. **Test manually**: Use the /reply command for testing before scheduling

## Troubleshooting

### Common Issues
- **No tweets found**: Adjust the filter criteria or check your keywords
- **API rate limits**: Reduce posting frequency or upgrade your Twitter API plan
- **MongoDB connection**: Verify the connection string and collection name
- **Apify quota**: Monitor Apify usage limits
- **LLM failures**: Check OpenRouter credits and model availability

### Best Practices
- Monitor your bot's replies for quality and appropriateness
- Regularly update keywords to stay relevant
- Keep an eye on engagement metrics
- Adjust timing based on your audience's activity patterns
- Maintain a balanced posting frequency to avoid appearing spammy

## Documentation Links
- **Full Documentation**: Google Doc Guide
- **Latest Version**: dziura.online/automation
- **MongoDB Setup Tutorial**: YouTube Guide

This workflow provides a comprehensive solution for automated, intelligent Twitter engagement while maintaining quality and avoiding spam-like behavior.
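For reference, a sketch of the routing check the If4 node performs: a 19-digit numeric string is treated as a community ID, anything else as a keyword. The regex and the field name are assumptions matching the description above.

```javascript
// n8n Code node (Run Once for Each Item): classify the randomly selected item for routing.
const selected = $json.keyword_community; // assumed field name from the random-selection step
const isCommunityId = /^\d{19}$/.test(selected);

return [{ json: { selected, route: isCommunityId ? 'community' : 'keyword' } }];
```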
by Incrementors
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# 📦 Multi-Platform Price Finder: Scraping Prices with Bright Data & Telegram

An intelligent n8n automation that fetches real-time product prices from marketplaces like Amazon, Wayfair, Lowe's, and more using Bright Data's dataset, and sends promotional messages via Telegram using AI. Perfect for price tracking, deal alerts, and affiliate monetization.

## 📋 Overview
This automation tracks product prices across top e-commerce platforms using Bright Data and sends alerts via Telegram based on the best available deals. The workflow is designed for affiliate marketers, resellers, and deal-hunting platforms that want real-time competitive pricing.

## ✨ Key Features
- 🔎 **Multi-Platform Scraping**: Supports Amazon, Wayfair, Lowe's, and more
- ⚡ **Bright Data Integration**: Access to structured product snapshots
- 📢 **AI-Powered Alerts**: Generates Telegram-ready promo messages using AI
- 🧠 **Lowest Price Logic**: Filters and compares products across sources
- 📈 **Data Merge & Processing**: Combines multiple sources into a single stream
- 🔄 **Keyword-Driven Search**: Searches using dynamic keywords from form input
- 📦 **Scalable Design**: Built to process multiple platforms simultaneously
- 🧼 **Clean Output**: Strips unnecessary formatting before publishing

## 🎯 What This Workflow Does

### Input
- **Search Keywords**: User-defined keyword(s) from a form trigger
- **Platform Sources**: Wayfair, Lowe's, Amazon, etc.
- **Bright Data API Key**: Needed for authenticated scraping

### Processing Steps
1. **User Input** via n8n form trigger (keyword-based)
2. **Bright Data API Trigger** for each marketplace
3. **Status Polling**: Wait until the scraping snapshot is ready
4. **Data Retrieval**: Fetches JSON results from the Bright Data snapshot
5. **Data Cleaning & Normalization**: Price, title, and URL are extracted
6. **Merging Products** from all platforms
7. **Find Lowest Price Product** using custom JS logic (a sketch appears at the end of this description)
8. **AI Prompt Generation** via Claude/Anthropic
9. **Telegram Formatting** and alert message creation

### Output
- 🛍️ Product Title
- 💰 Final Price
- 🔗 Product URL
- ✉️ Promotional Message (for Telegram/notifications)

## 🚀 Setup Instructions

### Step 1: Import Workflow
1. Open n8n > Workflows > + Add Workflow
2. Import the provided JSON file

### Step 2: Configure Bright Data
1. Add credentials under Credentials → Bright Data API
2. Set the appropriate `dataset_id` for each platform
3. Ensure the dataset includes `title`, `price`, and `url` fields

### Step 3: Enable Keyword Trigger
- Use the built-in Form Trigger node
- Input: Single keyword field (`SearchHere`)

### Step 4: Telegram or AI Integration
- Modify the prompt node for your language or tone
- Add a Telegram webhook or integration where needed

## 📖 Usage Guide

### Adding Keywords
1. Trigger the form with a product keyword like "iPhone 15"
2. Wait for the workflow to fetch the best deals and generate the Telegram message
### Understanding AI-Powered Output
AI creates a short, engaging message like:

> "🔥 Deal Alert: Get the iPhone 15 for just ₹74,999! Limited stock—Check it out: [link]"

### Debugging Output
- The output node shows cleaned JSON with `title`, `price`, `url`, and `message`
- If there are no valid results, a debug message is returned with sample structure info

## 🔧 Customization Options

### Add More Marketplaces
1. Clone any HTTP Request node (e.g., for Wayfair)
2. Update `dataset_id` and the required output fields

### Modify Price Logic
- Update the Code1 node to change the comparison (e.g., highest price instead of lowest)

### Change Message Format
- Edit the AI Agent prompt to customize tone/language
- Add emoji, CTAs, or markdown formatting as needed

## 🧪 Test & Activation
1. Add a few sample keywords via the form trigger
2. Run manually, or set it up as a webhook for external app input
3. Check the final AI-generated message in the output node

## 🚨 Troubleshooting

| Issue | Solution |
|-------|----------|
| No Data Returned | Ensure the keyword matches real products |
| Status Not "Ready" | Bright Data delay; add Wait nodes |
| Invalid API Key | Check Bright Data credentials |
| AI Errors | Adjust the prompt or validate input fields |

## 📊 Use Cases
- 💰 **Affiliate Campaigns**: Show the best deals across platforms
- 🛒 **Deal Pages**: Post live offers with product links
- 🧠 **Competitor Analysis**: Track cross-platform pricing
- 🔔 **Alert Bots**: Send real-time alerts to Telegram or Slack

## ✅ Quick Setup Checklist
- [x] Bright Data API credentials configured
- [x] n8n form trigger enabled
- [x] Claude or AI model connected
- [x] All HTTP requests working
- [x] AI message formatting verified

## 🌐 Example Output
```json
{
  "title": "Apple iPhone 15 Pro Max",
  "price": 1199,
  "url": "https://amazon.com/iphone-15",
  "message": "🔥 Grab the Apple iPhone 15 Pro Max for just $1199! Limited deal—Check it out: https://amazon.com/iphone-15"
}
```

📬 For any questions or support, please contact: 📧 <info@incrementors.com> or fill out this form: https://www.incrementors.com/contact-us/
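A minimal sketch of the kind of comparison the Code1 node performs, assuming each merged item carries `title`, `price`, and `url` fields as described in the setup notes; prices arriving as strings are normalized first.

```javascript
// n8n Code node (Code1-style): pick the cheapest product across all merged platform results.
const products = $input.all()
  .map(item => item.json)
  .map(p => ({ ...p, price: Number(String(p.price).replace(/[^0-9.]/g, '')) })) // normalize "$1,199" -> 1199
  .filter(p => p.title && p.url && Number.isFinite(p.price) && p.price > 0);

if (products.length === 0) {
  return [{ json: { error: 'No valid products found', hint: 'Expected items with title, price, url' } }];
}

const cheapest = products.reduce((best, p) => (p.price < best.price ? p : best));
return [{ json: cheapest }];
```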
by Jimleuk
This n8n workflow assists property managers and surveyors by reducing the time and effort it takes to complete property inventory surveys. In such surveys, articles and goods within a property may need to be captured and reported as a matter of record. This can take a sizable amount of time if the property or the number of items is big enough. Our solution is to delegate this task to a capable AI Agent who can identify and fill out the details of each item automatically.

## How it works
1. An Airtable Base is used to capture just the image of an item within the property.
2. Our workflow, monitoring this Airtable Base, sends the photo to an AI image recognition model to describe the item for the purpose of identification.
3. Our AI agent uses this description and the help of Google's reverse image search in an attempt to find an online product page for the item (a sketch of this lookup appears below).
4. If found, the product page is scraped for the item's specifications, which are then used to fill out the rest of the item's details in our Airtable.

## Requirements
- Airtable for capturing photos and product information
- OpenAI account for the image recognition service and the agent's AI
- SerpAPI account for Google reverse image search
- Firecrawl.dev account for web scraping

## Customising this workflow
- Try building an internal inventory database to query and integrate into the workflow. This could save on costs by avoiding a fresh lookup each time for common items.
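For orientation, a sketch of the reverse image lookup done from an n8n Code node against SerpAPI's Google Reverse Image engine. The response field names follow SerpAPI's documented shape but should be verified against your plan, and `imageUrl` is an assumed field holding a publicly reachable photo URL from Airtable.

```javascript
// n8n Code node: look up an item photo via SerpAPI's Google Reverse Image engine.
const imageUrl = $json.imageUrl; // assumed field with the public photo URL
const apiKey = 'YOUR_SERPAPI_KEY'; // use n8n credentials in practice

const data = await this.helpers.httpRequest({
  method: 'GET',
  url: 'https://serpapi.com/search.json',
  qs: { engine: 'google_reverse_image', image_url: imageUrl, api_key: apiKey },
  json: true,
});

// Take the first matching result as a candidate product page for scraping.
const candidate = data.image_results?.[0] ?? null;
return [{ json: { productPageUrl: candidate?.link ?? null, title: candidate?.title ?? null } }];
```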
by Niranjan G
# 🛡️ Automated AWS Key Compromise Remediation

## Description
This n8n workflow provides a secure, enterprise-grade response system for AWS IAM access key compromises, with built-in form submission and human approval mechanisms. When an AWS access key is suspected to be compromised, this workflow enables rapid containment through a secure web form interface with basic authentication, human approval via Slack, and automated damage prevention through immediate key deactivation, credential invalidation, and comprehensive security reporting.

## How This Workflow is Useful

### Secure Form-Based Response
- **Authenticated Form Submission**: Secure web form with basic authentication for capturing compromise details
- **Human Approval Workflow**: Slack-based approval system for sensitive security operations
- **Rapid Key Deactivation**: Instantly disables compromised access keys after approval
- **Credential Invalidation**: Creates and applies security policies to invalidate existing temporary credentials
- **Policy Analysis**: Automatically scans and analyzes both inline and attached IAM policies for the affected user
- **AI-Powered Reporting**: Generates detailed security reports with intelligent analysis and team notifications

### Business Value
- **Reduces Mean Time to Response (MTTR)**: Automates manual security procedures that typically take hours
- **Minimizes Security Exposure**: Immediate containment prevents potential data breaches and unauthorized resource access
- **Ensures Compliance**: Provides the audit trails and documentation required for security compliance frameworks
- **Cost Prevention**: Prevents potential financial damage from compromised credentials being used maliciously
- **Rapid Response Capability**: Streamlines security response procedures when incidents are detected

### Technical Benefits
- **AWS Best Practices**: Implements official AWS security recommendations for key compromise response
- **Scalable Architecture**: Handles multiple access keys and complex IAM policy structures
- **Error Handling**: Robust error handling ensures the workflow continues even if individual steps fail
- **Audit Trail**: Complete logging of all actions taken during the incident response
- **Integration Ready**: Easily integrates with existing security tools and notification systems

## Use Cases

### 1. Incident Response Automation
- Automated response to security alerts from AWS CloudTrail
- Integration with SIEM systems for immediate key compromise response
- 24/7 security monitoring and automated containment

### 2. Compliance and Audit
- Meeting regulatory requirements for incident response documentation
- Providing audit trails for security compliance frameworks (SOC 2, ISO 27001, PCI DSS)
- Demonstrating due diligence in security incident handling

### 3. Multi-Account Management
- Centralized security response across multiple AWS accounts
- Consistent incident response procedures across different environments
- Standardized security automation for enterprise AWS deployments
### 4. Security Training and Testing
- Security team training on AWS incident response procedures
- Tabletop exercises and security drills
- Testing and validation of security response capabilities

## Key Features

### Core Functionality
- ✅ **Secure Form Interface**: Web form with basic authentication for secure data submission
- ✅ **Human Approval Gate**: Slack-based approval workflow for sensitive operations
- ✅ **Authenticated Data Processing**: Secure handling of form submissions with validation
- ✅ **Immediate Key Deactivation**: Instant disabling of compromised credentials after approval
- ✅ **Security Policy Generation**: Automatic creation and attachment of credential invalidation policies
- ✅ **Policy Analysis**: Deep analysis of user permissions and attached policies
- ✅ **AI Security Analysis**: Intelligent security report generation with risk assessment
- ✅ **Team Notifications**: Real-time Slack notifications to security teams
- ✅ **Comprehensive Logging**: Complete audit trail of all response actions

### Technical Specifications
- **Secure Form Interface**: Web form with basic authentication for secure data capture
- **Human Approval System**: Slack-based approval workflow for sensitive operations
- **AWS API Integration**: Direct integration with AWS IAM APIs
- **Authentication Layer**: Basic auth protection for form submissions
- **Error Handling**: Robust error handling with continuation on non-critical failures
- **Scalable Processing**: Handles multiple policies and complex IAM structures
- **Security Best Practices**: No hardcoded credentials; uses AWS credential management
- **Modular Design**: Easy to customize and extend for specific organizational needs

## Prerequisites

### Required Credentials
- **AWS Credentials** with IAM permissions for:
  - ListAccessKeys, UpdateAccessKey
  - ListUserPolicies, ListAttachedUserPolicies
  - CreatePolicy, AttachUserPolicy
  - GetPolicy, GetPolicyVersion, GetUserPolicy

### Required Integrations
- **Slack Workspace** for the approval workflow and team notifications
- **Basic Authentication Setup** for secure form access

### Optional Integrations
- **AI Language Model** (Claude/OpenAI) for intelligent security analysis and report generation

## Installation and Setup
1. Import the workflow into your n8n instance
2. Configure AWS credentials in the n8n credential manager
3. Set up basic authentication for the secure form interface
4. Configure the Slack integration for approval notifications and team alerts
5. Set up an AI model (optional) for enhanced security analysis and reporting
6. Configure the approval workflow in Slack for human oversight
7. Test in a development environment before production use

## Workflow Inputs

### Secure Form Submission
This workflow uses a secure web form with basic authentication to capture compromise details:
- **Username**: The AWS IAM username of the compromised account
- **Access Key ID**: The specific access key ID that has been compromised

### Authentication & Approval Process
1. **Form Authentication**: Basic authentication protects the submission form
2. **Data Processing**: Secure handling and validation of submitted credentials
3. **Human Approval**: A Slack notification is sent to the security team for approval
4. **Automated Execution**: Upon approval, the workflow executes the security response

This multi-layered approach ensures that sensitive security operations require both authentication and human oversight before execution.
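For orientation, here is a minimal sketch of the core containment call the workflow's AWS integration performs: deactivating the reported key with IAM's UpdateAccessKey. The AWS SDK for JavaScript v3 is used here purely for illustration; the workflow itself drives the IAM API through n8n's AWS integration, and the user name and key ID shown are placeholders for the form values.

```javascript
// Sketch: deactivate a compromised IAM access key (AWS SDK for JavaScript v3).
import { IAMClient, UpdateAccessKeyCommand } from '@aws-sdk/client-iam';

const iam = new IAMClient({ region: 'us-east-1' }); // region is illustrative

async function deactivateKey(userName, accessKeyId) {
  // Setting Status to "Inactive" immediately blocks new API calls signed with this key.
  await iam.send(new UpdateAccessKeyCommand({
    UserName: userName,
    AccessKeyId: accessKeyId,
    Status: 'Inactive',
  }));
}

await deactivateKey('compromised-user', 'AKIAEXAMPLEKEYID'); // values come from the form submission
```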
## 🚀 Automate with Slack Integration
Want to fully automate and simplify this workflow? Connect it with Slack for seamless team collaboration and instant response capabilities!

### Interactive Slack Automation
Combine this AWS Key Compromise Response workflow with our Interactive Slack Approval & Data Submission System to create a fully automated incident response pipeline:
- **Instant Slack Alerts**: Receive immediate notifications when key compromises are detected
- **One-Click Response**: Trigger the AWS response workflow directly from Slack with interactive buttons
- **Team Collaboration**: Enable security teams to respond collectively through Slack channels
- **Approval Workflows**: Add human approval gates before executing critical security actions
- **Real-time Updates**: Get live status updates and completion notifications in Slack

### How the Complete Solution Works
1. **Detection**: External security monitoring tools (CloudTrail, SIEM, etc.) detect a potential key compromise
2. **Secure Form Access**: The security team accesses the authenticated web form to submit compromise details
3. **Form Submission**: Credentials are securely submitted through the basic auth-protected form
4. **Human Approval**: A Slack notification is sent to the security team for review and approval
5. **Approved Execution**: Upon approval, the AWS security response executes automatically
6. **Real-time Updates**: Progress and completion notifications are sent back to Slack
7. **Security Analysis**: AI-powered analysis and comprehensive reporting are delivered to the team

### Get Started with Full Automation
To enable automatic notifications and complete the automation pipeline, use the Interactive Slack Approval & Data Submission System with Webhooks workflow: https://n8n.io/workflows/5049-interactive-slack-approval-and-data-submission-system-with-webhooks/

This integration transforms manual security responses into streamlined, team-collaborative automation that reduces response time from hours to minutes.

## Security Considerations
- **Form Authentication**: Basic authentication protects the submission interface
- **Human Approval Gate**: Slack-based approval prevents unauthorized execution
- **AWS Credential Management**: Uses AWS credential management best practices
- **No Sensitive Data Storage**: No sensitive data is stored in the workflow configuration
- **Least-Privilege Access**: Implements least-privilege access principles
- **Complete Audit Trails**: Provides complete audit trails for compliance
- **Secure Data Processing**: Encrypted handling of form submissions and approvals
- **Immediate Damage Prevention**: Designed for rapid containment after approval

## ⚠️ Important Disclaimer
**Use with Caution**: Disabling access keys without proper understanding can significantly impact your personal or business operations. This workflow immediately deactivates AWS access keys, which may disrupt running applications, automated processes, or services that depend on these credentials.

### AWS Best Practices Recommendation
- **Use IAM Roles instead of Access Keys** whenever possible for enhanced security
- IAM roles provide temporary credentials and eliminate the need for long-term access keys
- Follow the principle of least privilege when assigning permissions
- Regularly rotate and audit your AWS credentials
- Implement proper monitoring and alerting for credential usage

### Before Using This Workflow
- Ensure you understand which services and applications use the target access key
- Have a rollback plan in case of accidental disruption
- Test in a non-production environment first
- Coordinate with your team before executing in production

For comprehensive AWS security best practices, refer to the AWS Security Best Practices Guide.

For more workflows and automation solutions, visit: https://n8n.io/creators/niranjan/
by Jonas
# 🎧 Daily RSS Digest & Podcast Generation

This workflow automates the creation of a daily sports podcast from your favorite news sources. It fetches articles, uses AI to write a digest and a two-person dialogue, and produces a single, merged audio file with Kokoro TTS, ready for listening.

## ✨ How it works
1. 📰 **Fetch & Filter Daily News**: The workflow triggers daily, fetches articles from your chosen RSS feeds, and filters them to keep only the most recent content.
2. ✍️ **Generate AI Digest & Script**: Using Google Gemini, it first creates a written summary of the day's news. A second AI agent then transforms this news into an engaging, conversational podcast script between two distinct AI speakers.
3. 🗣️ **Generate Voices in Chunks**: The script is split into individual lines of dialogue. The workflow then loops through each line, calling a Text-to-Speech (TTS) API to generate a separate audio file (an MP3 chunk) for each part of the conversation.
4. 🎛️ **Merge Audio with FFmpeg**: After all the audio chunks are created and saved locally, a command-line script generates a list of all the files and uses FFmpeg to losslessly merge them into a single, seamless MP3 file (see the sketch below). All temporary files are then deleted.
5. 📤 **Send the Final Podcast**: The final, merged MP3 is read from the server and delivered directly to your Telegram chat with a dynamic, dated filename.

## You can modify
- 📰 The **RSS Feeds**, to any news source you want.
- 🤖 The **AI Prompts**, to change the tone, language, or style of the digest and podcast.
- 🎙️ The **TTS Voices** used for the two speakers.
- 📫 The **Final Delivery Method** (e.g., send to Discord, save to Google Drive, etc.).

Perfect for creating a personalized, hands-free news briefing to listen to on your commute.

Inspired by: https://n8n.io/workflows/6523-convert-newsletters-into-ai-podcasts-with-gpt-4o-mini-and-elevenlabs/
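A minimal sketch of the merge step, written here as JavaScript around FFmpeg's concat demuxer for illustration (in the workflow this runs as a command-line script). It assumes the chunks were saved as chunk_000.mp3, chunk_001.mp3, and so on in /tmp/podcast; both the directory and naming scheme are assumptions.

```javascript
// Sketch: losslessly concatenate MP3 chunks with FFmpeg's concat demuxer.
import { execSync } from 'node:child_process';
import { readdirSync, writeFileSync, unlinkSync } from 'node:fs';

const dir = '/tmp/podcast'; // assumed working directory for the chunks
const chunks = readdirSync(dir).filter(f => /^chunk_\d+\.mp3$/.test(f)).sort();

// FFmpeg's concat demuxer reads a list file of inputs, one per line.
const listFile = `${dir}/list.txt`;
writeFileSync(listFile, chunks.map(f => `file '${dir}/${f}'`).join('\n'));

// -c copy re-muxes without re-encoding, so the merge is lossless.
execSync(`ffmpeg -y -f concat -safe 0 -i ${listFile} -c copy ${dir}/podcast.mp3`);

// Clean up temporary files, keeping only the merged podcast.
chunks.forEach(f => unlinkSync(`${dir}/${f}`));
unlinkSync(listFile);
```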
by Anna Bui
# 🎯 LinkedIn ICP Lead Qualification Automation

Automatically identify and qualify ideal customer prospects from LinkedIn post reactions using AI-powered profile analysis and intelligent data enrichment.

Perfect for sales teams and marketing professionals who want to convert LinkedIn engagement into qualified leads without manual research. This workflow transforms post reactions into actionable prospect data with AI-driven ICP classification.

## Good to know
- **LinkedIn Safety**: Only use cookie-free Apify actors to avoid account detection and suspension risks
- **Daily Processing Limits**: Scrape a maximum of 1 page of reactions per day (50-100 profiles) to stay under LinkedIn's radar
- Apify actors cost approximately $0.01-0.05 per profile scraped; budget accordingly for daily processing
- Includes intelligent rate limiting to prevent API restrictions and maintain LinkedIn account safety
- AI classification requires a clear definition of your Ideal Customer Profile criteria
- Processing too many profiles or running too frequently will trigger LinkedIn's anti-scraping measures
- Always monitor your LinkedIn account health and Apify usage patterns for any warning signs

## How it works
1. Scrapes LinkedIn post reactions using Apify's specialized actor to identify engaged users
2. Extracts and cleans profile data including names, job titles, and LinkedIn URLs
3. Checks against existing Airtable records to prevent duplicate processing and save costs (see the dedupe sketch at the end of this description)
4. Creates new prospect records with basic information for tracking purposes
5. Enriches profiles with comprehensive LinkedIn data including company details and experience
6. Aggregates and formats profile data for AI analysis and classification
7. Uses AI to analyze prospects against your ICP criteria with detailed reasoning
8. Updates records with ICP classification results and extracted email addresses
9. Implements smart batching and delays to respect API rate limits throughout the process

## How to use
- **IMPORTANT**: Select cookie-free Apify actors only, to avoid LinkedIn account suspension
- Set up Apify API credentials in both HTTP Request nodes for safe LinkedIn scraping
- Configure Airtable OAuth2 authentication and select your prospect-tracking base
- Replace the LinkedIn post URL with your target post in the initial scraper node
- **Daily Usage**: Process only 1 page of reactions per day (typically 50-100 profiles) maximum
- Customize the AI classification prompt with your specific ICP criteria and job titles
- Test with a small batch first to verify the setup and monitor both API costs and LinkedIn account health
- Schedule the workflow to run daily, rather than processing large batches, to maintain account safety

## Requirements
- Apify account with API access and sufficient credits for profile scraping
- Airtable account with OAuth2 authentication configured
- OpenAI or compatible AI model credentials for prospect classification
- LinkedIn post URL with reactions to analyze (minimum 10+ reactions recommended)
- Clear definition of your Ideal Customer Profile criteria for accurate AI classification
## Customising this workflow
- **Safety First**: Always verify Apify actors are cookie-free before configuring, to protect your LinkedIn account
- Modify the ICP classification criteria in the AI prompt to match your specific target customer profile
- Set up daily scheduling (not hourly/frequent) to respect LinkedIn's usage patterns and avoid detection
- Adjust rate-limiting delays based on your comfort level with LinkedIn scraping frequency
- Add additional data fields to the Airtable schema for storing custom prospect information
- Integrate with CRM systems like HubSpot or Salesforce for automatic lead import
- Set up Slack notifications for new qualified prospects or daily summary reports
- Create email marketing sequences in tools like Mailchimp for nurturing qualified leads
- Add lead scoring based on company size, industry, or engagement level for prioritization
- Consider rotating between different LinkedIn posts to diversify your prospect sources while maintaining daily limits
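A minimal sketch of the duplicate check described in step 3 of "How it works", done in an n8n Code node. It assumes the Airtable records expose a `linkedin_url` field and the scraped reactions carry a `profileUrl`; the node name and both field names are illustrative.

```javascript
// n8n Code node: drop profiles that already exist in Airtable before enrichment.
const existing = new Set(
  $('Get Airtable Records').all().map(item => item.json.linkedin_url) // assumed node name and field
);

const fresh = $input.all().filter(item => !existing.has(item.json.profileUrl));

return fresh; // only new prospects continue to enrichment, saving Apify credits
```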
by Hunyao
Note: This workflow assumes you already have your product's Amazon reviews saved in a Google Sheet. If you still need those reviews, run my Amazon Reviews Scraper workflow first, then plug the resulting spreadsheet into this template.

## What it does
Transforms any draft Google Doc into multiple high-converting sales pages. It blends Alex Hormozi's value-stacking tactics with persona targeting based on Maslow's Hierarchy of Needs, using your own customer reviews for proof and voice of customer (VOC).

## Perfect for
- Growth and creative strategists
- Freelance copywriters and agencies
- Founders sharpening offers and funnels

## Apps used
Google Sheets, Google Docs, LangChain OpenRouter LLM

## How it works
1. A Form Trigger collects Drive folder IDs, the base copy URL, and options.
2. The workflow fetches the draft copy and the product feature doc.
3. It samples reviews, extracts VOC insights, and maps them to Maslow needs (a sampling sketch appears at the end of this description).
4. The LLM drafts headlines and hooks following Hormozi's $100M Offers principles.
5. Personas drive tone, objections, and urgency in each copy variant.
6. A loop writes one Google Doc per variant in your chosen folder.
7. Customer analysis docs are saved to a second folder for reuse.

## Setup
1. Share two Drive folders and copy their IDs (the text after `folders/`).
2. Paste each ID into Customer Analysis Folder ID and Advertorial Copy Folder ID.
3. Provide File Name, Base copy (Google Docs URL), and Product Feature/USPs Doc.
4. Optional: Reviews Sheet URL, Number of reviews to use, Target City.
5. Set the Number of Copies you need (1-20).
6. Add Google Docs OAuth2 and Google Sheets OAuth2 credentials in n8n.

If you have any questions about running the workflow, feel free to reach out at my YouTube channel: https://www.youtube.com/@lifeofhunyao
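A minimal sketch of the review-sampling step ("Number of reviews to use"), assuming each incoming item is one review row from the Google Sheet; the hard-coded sample size stands in for the form value.

```javascript
// n8n Code node: randomly sample N reviews for VOC extraction.
const reviews = $input.all();
const sampleSize = 30; // wire this to the form's "Number of reviews to use" value

// Fisher-Yates shuffle, then take the first sampleSize items.
for (let i = reviews.length - 1; i > 0; i--) {
  const j = Math.floor(Math.random() * (i + 1));
  [reviews[i], reviews[j]] = [reviews[j], reviews[i]];
}

return reviews.slice(0, sampleSize);
```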
by Zacharia Kimotho
n8n recently introduced folders, which are a big improvement for workflow management on top of tags. The catch is that existing workflows need to be moved into folders manually. The simplest approach is to convert your current tags into folders and move all workflows carrying each tag into the corresponding folder. This assumes the tag name will be used as the folder name.

## To Note
For workflows that use more than one tag, the workflow will be assigned to the folder of the last tag that runs (see the sketch below).

## How does it work
I took the liberty of simplifying the setup needed on your part and keeping it beginner-friendly:
1. Copy and paste this workflow into your n8n canvas. You must have existing workflows and tags before you can run this.
2. Set your n8n login details on the **set Credentials** node, with the n8n URL, username, and password.
3. Set up your n8n API credentials on the **get workflows** n8n node.
4. Run the workflow. This opens a form where you can select the number of tags to move; click submit.
5. The workflow responds with the number of workflows that were successfully moved.

Read more about the template.

Built by Zacharia Kimotho - Imperol
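A minimal sketch of the mapping rule described above: grouping workflows by tag, where the last tag in a workflow's list wins. It assumes the n8n API's workflow objects carry a `tags` array of objects with a `name` property, which matches the public API's shape.

```javascript
// n8n Code node: decide the target folder for each workflow from its tags.
const workflows = $input.all().map(item => item.json);

const plan = workflows
  .filter(wf => Array.isArray(wf.tags) && wf.tags.length > 0)
  .map(wf => ({
    workflowId: wf.id,
    workflowName: wf.name,
    // Last tag wins when a workflow has several tags.
    targetFolder: wf.tags[wf.tags.length - 1].name,
  }));

return plan.map(p => ({ json: p }));
```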
by Hybroht
# Source Discovery - Automatically Search More Up-to-Date Information Sources

## 🎬 Overview
Version: 1.0

This workflow utilizes various nodes to discover and analyze potential sources of information from platforms like Google, Reddit, GitHub, Bluesky, and others. It is designed to streamline the process of finding relevant sources based on specified search themes.

## ✨ Features
- Automated source discovery from multiple platforms
- Filtering of existing and undesired sources
- Error handling for API requests
- User-friendly configuration options

## 👤 Who is this for?
This workflow is ideal for researchers, content marketers, journalists, and anyone looking to efficiently gather and analyze information from various online sources.

## 💡 What problem does this solve?
This workflow addresses the challenge of manually searching for relevant information sources, saving time and effort while ensuring that users have access to the most pertinent content. Ideal use-cases include:
- Resource compilation for academic and educational purposes
- Journalism and research
- Content marketing
- Competitor analysis

## 🔍 What this workflow does
The workflow gathers data from selected platforms through search terms. It filters out known and undesired sources, analyzes the content, and provides insights into potential sources relevant to the user's needs.

## 🔄 Workflow Steps
1. **Search Queries**
   - Fetch sources using SerpAPI search, DuckDuckGo, and Bluesky.
   - Utilizes GitHub repositories to find relevant links.
   - Leverages RSS feeds from subreddits to identify potential sources.
2. **Filtering Step**
   - Removes existing and undesired sources from the results (a sketch of this step follows below).
3. **Source Selection**
   - Analyzes the content of the identified sources for relevance.

## 📌 Expected Input / Configuration
The workflow is primarily configured via the Configure Workflow Args (Manual) node or the Global Variables custom node:
- Search themes: keywords or phrases relevant to the desired content.
- Lists of known sources and undesired sources for filtering.

## 📦 Expected Output
A curated list of potential sources relevant to the specified search themes, along with insights into their content.
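A minimal sketch of the filtering step, assuming the configured lists hold bare domains (e.g. "example.com") and each search result item carries a `url` field; adjust the names to your Global Variables setup.

```javascript
// n8n Code node: drop results whose domain is already known or explicitly undesired.
const knownSources = ['example.com'];        // from Global Variables in practice
const undesiredSources = ['spam-site.net'];  // from Global Variables in practice
const blocked = new Set([...knownSources, ...undesiredSources]);

const results = $input.all().filter(item => {
  try {
    const host = new URL(item.json.url).hostname.replace(/^www\./, '');
    return !blocked.has(host);
  } catch {
    return false; // discard items with malformed URLs
  }
});

return results;
```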
## 📌 Example

## ⚙️ n8n Setup Used
- **n8n version**: 1.105.3
- **n8n-nodes-serpapi**: 0.1.6
- **n8n-nodes-globals**: 1.1.0
- **n8n-nodes-bluesky-enhanced**: 1.6.0
- **n8n-nodes-duckduckgo-search**: 30.0.4
- **LLM Model**: mistral-small-latest (API)
- **Platform**: Podman 4.3.1 on Linux
- **Date**: 2025-08-06

## ⚡ Requirements to Use / Setup
- Self-hosted or cloud n8n instance.
- Install the following custom nodes: SerpAPI, Bluesky, and DuckDuckGo Search:
  - n8n-nodes-serpapi
  - n8n-nodes-duckduckgo-search
  - n8n-nodes-bluesky-enhanced
- Install the Global Variables node for enhanced configuration: n8n-nodes-globals (or use an Edit Fields (Set) node instead).
- Provide valid credentials to the nodes for your preferred LLM model, SerpAPI, and Bluesky. Credentials for GitHub are recommended.

## ⚠️ Notes, Assumptions & Warnings
- Ensure compliance with the terms of service of any platforms accessed or discovered in this workflow, particularly concerning data usage and attribution.
- Monitor API usage to avoid hitting rate limits.
- The workflow may encounter errors such as 403 responses; in such cases, it will continue by ignoring the affected substep.
- Duplicate removal is applied, but occasional overlaps might still appear depending on the sources.
- This workflow assumes familiarity with n8n, APIs, and search engines.
- Using AI agents (Mistral or substitute LLMs) requires access to their API services and keys.
- This is not a curator of news. It is designed to find websites that are relevant and useful to your searches. If you are looking for a relevant news selector, please check this workflow.

## ℹ️ About Us
This workflow was developed by the Hybroht team. Our goal is to create tools that harness the possibilities of technology and more. We aim to continuously improve and expand functionalities based on community feedback and evolving use cases. For questions, reach out via contact@hybroht.com.

## ⚖️ Warranty & Legal Notice
This free workflow is provided "as-is" without any warranties of any kind, either express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. By using this workflow, you acknowledge that you do so at your own risk. We shall not be held responsible for any damages, losses, or liabilities arising from the use or inability to use this workflow, including but not limited to any direct, indirect, incidental, or consequential damages. It is your responsibility to ensure that your use of this workflow complies with all applicable laws and regulations.
by Jamot
This n8n template automatically summarizes your WhatsApp group activity from the past week and generates a team report.

## Why use this?
Remote teams rely on chat for communication, but important discussions, decisions, and ideas get buried in message threads and forgotten by Monday. This workflow ensures nothing falls through the cracks.

## How it works
1. Runs every Monday at 6am to collect the previous week's group messages
2. Groups conversations by participant and analyzes message threads (see the sketch at the end of this description)
3. AI summarizes each member's activity into a personal report
4. Combines all individual reports into one comprehensive team overview
5. Posts the final report back to your WhatsApp group to kick off the new week

## Setup requirements
- WhatsApp via whapAround.pro (no Meta API needed)
- Gemini AI (or an alternative LLM of your choice)

## Best practices
- Use one workflow per WhatsApp group for focused results
- Filter for specific team members if needed
- Customize the report tone to match your team culture
- Adjust the schedule if weekly reports don't suit your team's pace

## Customization ideas
- Send reports via email instead of posting to busy groups
- Include project metrics alongside message summaries
- Connect to knowledge bases or ticket systems for additional context

Perfect for project managers who want to keep distributed teams aligned and ensure important conversations don't get lost in the chat noise.
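A minimal sketch of the grouping step, assuming each incoming item is one message with `sender` and `text` fields; both names are illustrative, so match them to what your WhatsApp provider returns.

```javascript
// n8n Code node: bucket the week's messages by participant for per-member summaries.
const messages = $input.all().map(item => item.json);

const byParticipant = {};
for (const msg of messages) {
  const sender = msg.sender ?? 'unknown'; // assumed field from the WhatsApp provider
  (byParticipant[sender] ??= []).push(msg.text);
}

// Emit one item per participant, ready to feed the per-member AI summarizer.
return Object.entries(byParticipant).map(([participant, texts]) => ({
  json: { participant, messageCount: texts.length, transcript: texts.join('\n') },
}));
```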