by Leonard
Open Deep Research - AI-Powered Autonomous Research Workflow

**Description**
This workflow automates deep research by leveraging AI-driven search queries, web scraping, content analysis, and structured reporting. It enables autonomous research with iterative refinement, allowing users to collect, analyze, and summarize high-quality information efficiently.

**How it works**
- 🔹 **User Input**: The user submits a research topic via a chat message.
- 🧠 **AI Query Generation**: A Basic LLM generates up to four refined search queries to retrieve relevant information.
- 🔎 **SerpAPI Google Search**: The workflow loops through each generated query and retrieves top search results using SerpAPI.
- 📄 **Jina AI Web Scraping**: Extracts and summarizes webpage content from the URLs obtained via SerpAPI.
- 📊 **AI-Powered Content Evaluation**: An AI Agent evaluates the relevance and credibility of the extracted content.
- 🔁 **Iterative Search Refinement**: If the AI finds insufficient or low-quality information, it generates new search queries to improve results.
- 📜 **Final Report Generation**: The AI compiles a structured markdown report, including sources with citations.

**Set Up Instructions**
🚀 Estimated setup time: ~10-15 minutes

✅ **Required API Keys:**
- SerpAPI → for Google Search results
- Jina AI → for text extraction
- OpenRouter → for AI-driven query generation and summarization

⚙️ **n8n Components Used:**
- AI Agents with memory buffering for iterative research
- Loops to process multiple search queries efficiently
- HTTP Requests for direct API interactions with SerpAPI and Jina AI

📝 **Recommended Enhancements:**
- Add sticky notes in n8n to explain each step for new users
- Implement a Google Drive or Notion integration to save reports automatically

🎯 **Ideal for:**
✔️ Researchers & Analysts: automate background research
✔️ Journalists: quickly gather reliable sources
✔️ Developers: learn how to integrate multiple AI APIs into n8n
✔️ Students: speed up literature reviews

🔗 Completely free and open-source! 🚀
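For orientation, the search-and-scrape loop reduces to two HTTP calls. This is a minimal Python sketch, not the workflow's actual nodes; the query string, result count, and key name are placeholders:

```python
import requests

SERPAPI_KEY = "your-serpapi-key"  # placeholder

# One pass of the loop: run a generated query through SerpAPI...
params = {"engine": "google", "q": "example research query", "api_key": SERPAPI_KEY}
results = requests.get("https://serpapi.com/search", params=params).json()
urls = [r["link"] for r in results.get("organic_results", [])]

# ...then fetch readable text for each hit via the Jina Reader prefix
for url in urls[:3]:
    page_text = requests.get(f"https://r.jina.ai/{url}").text
    print(url, page_text[:150])
```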
by Lakshit Ukani
One-way sync between Telegram, Notion, Google Drive, and Google Sheets

**Who is this for?**
This workflow is perfect for productivity-focused teams, remote workers, virtual assistants, and digital knowledge managers who receive documents, images, or notes through Telegram and want to automatically organize and store them in Notion, Google Drive, and Google Sheets, without any manual work.

**What problem is this workflow solving?**
Managing Telegram messages and media manually across different tools like Notion, Drive, and Sheets can be tedious. This workflow automates the classification and storage of incoming Telegram content, whether it's a text note, an image, or a document. It saves time, reduces human error, and ensures that media is stored in the right place with metadata tracking.

**What this workflow does**
- **Triggers on a new Telegram message** using the Telegram Trigger node.
- **Classifies the message type** using a Switch node:
  - Text messages are appended to a Notion block.
  - Images are converted to base64, uploaded to imgbb, and then added to Notion as toggle-image blocks (see the sketch after this description).
  - Documents are downloaded, uploaded to Google Drive, and the metadata is logged in Google Sheets.
- **Sends a completion confirmation** back to the original Telegram chat.

**Setup**
1. **Telegram Bot**: Set up a bot and get the API token.
2. **Notion Integration**: Share access to your target Notion page/block. Use the Notion API credentials and the block ID where content should be appended.
3. **Google Drive & Sheets**: Connect the relevant accounts. Select the destination folder and spreadsheet.
4. **imgbb API**: Obtain a free API key from imgbb.
5. Replace placeholder credential IDs and asset URLs as needed in the imported workflow.

**How to customize this workflow to your needs**
- **Change storage locations**: Update the Notion block ID or Google Drive folder ID. Point the Google Sheets node at a different file or sheet.
- **Add more filters**: Use additional Switch rules to handle other Telegram message types (like videos or voice messages).
- **Modify the response message**: Personalize the Telegram confirmation text based on the file type or sender.
- **Use a different image hosting service** if you don't want to use imgbb.
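A rough Python sketch of the image branch's upload step, with a local file standing in for the downloaded Telegram photo (the key is a placeholder):

```python
import base64
import requests

IMGBB_KEY = "your-imgbb-api-key"  # placeholder

# Telegram photo bytes -> base64 -> imgbb, as in the image branch of the Switch node
with open("photo.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

resp = requests.post(
    "https://api.imgbb.com/1/upload",
    data={"key": IMGBB_KEY, "image": encoded},
).json()
print(resp["data"]["url"])  # hosted image URL to embed in the Notion toggle block
```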
by Jaruphat J.
⚠️ Important Disclaimer: This template is only compatible with a self-hosted n8n instance using a community node.

**Who is this for?**
This workflow is ideal for digital content creators, marketers, social media managers, and automation enthusiasts who want to produce fully automated vertical video content featuring inspirational or motivational quotes. Specifically tailored for the Thai language, it effectively demonstrates integration of AI-generated imagery, video, ambient sound, and visually appealing quote overlays.

**What problem is this workflow solving?**
Manually creating high-quality, vertically formatted quote videos is often repetitive, time-consuming, and involves multiple tedious steps like selecting suitable visuals, editing audio tracks, and correctly overlaying text. Additionally, manual uploading to platforms like YouTube and maintaining accurate content records are prone to errors and inefficiencies.

**What this workflow does:**
1. Fetches a quote, author, and scenic background description from a Google Sheet.
2. Automatically generates a vertical background image using the Flux AI (txt2img) API.
3. Transforms the AI-generated image into a subtly animated cinematic vertical video using the Kling video-generation API.
4. Generates an immersive, ambient background sound using ElevenLabs' sound generation API.
5. Dynamically overlays the selected Thai-language quote and author text onto the generated video using FFmpeg, ensuring visually appealing typography (e.g., the Kanit font); a sketch of this command appears at the end of this description.
6. Automatically uploads the final video to YouTube.
7. Updates the resulting YouTube video URL back to the Google Sheet, keeping your content records current and well-organized.

**Setup Requirements:**
- A self-hosted n8n instance, as the execution of FFmpeg commands is not supported on n8n Cloud. Ensure FFmpeg is installed in your self-hosted environment.
- API keys and accounts set up for Flux, Kling, ElevenLabs, Google Sheets, Google Drive, and YouTube.

**Google Sheets Setup:**
Your Google Sheet must include these columns:
- **Index**: Unique identifier for each quote
- **Quote (Thai)**: Quote text in Thai (or your chosen language)
- **Pen Name (Thai)**: Author or pen name of the quote's creator
- **Background (EN)**: Short English description of the scene (e.g., "sunrise over mountains")
- **Prompt (EN)**: Detailed English prompt describing the image/video scene (e.g., "peaceful sunrise with misty mountains")
- **Background Image**: URL of the AI-generated image (updated automatically)
- **Background Video**: URL of the generated video (updated automatically)
- **Music Background**: URL of the generated ambient audio (updated automatically)
- **Video Status**: YouTube URL (updated automatically after upload)

A ready-to-use Google Sheets template is provided [here (provide your actual link)] to help you get started quickly.

**Next steps:**
1. Authenticate Google Sheets, Google Drive, the YouTube API, Flux AI, the Kling API, and the ElevenLabs API within n8n.
2. Ensure FFmpeg has access to fonts compatible with your chosen language (for Thai, the "Kanit" font is recommended).
3. Prepare your Google Sheet with the desired quotes, authors, and image/video prompts.

**How to customize this workflow to your needs:**
- **Fonts:** Adjust font type, size, color, and positioning within the provided FFmpeg commands in the workflow's code nodes. Verify that selected fonts properly support your target language.
- **Media customization:** Customize the scene descriptions in your Google Sheet to change the image/video backgrounds automatically generated by AI.
- **Quote management:** Easily manage, add, or update quotes and associated details directly via Google Sheets without workflow modifications.
- **Audio ambiance:** Customize or adjust the ambient sound prompt for ElevenLabs within the workflow's HTTP Request node to match your video's desired mood.

**Benefits of using AI-generated content and localized fonts:**
Leveraging AI-generated visual and audio elements along with localized fonts greatly enhances audience engagement by creating visually appealing, professional-quality content tailored specifically for your target audience. This automated workflow drastically reduces production time and manual effort, enabling rapid, consistent content creation optimized for platforms such as YouTube Shorts, Instagram Reels, and TikTok.
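For reference, the FFmpeg overlay step looks roughly like the following Python sketch. The font path, sizes, positions, and file names are assumptions to adapt in the workflow's code node:

```python
import subprocess

# Build a two-layer drawtext filter: quote centred, pen name below it.
font = "/usr/share/fonts/truetype/kanit/Kanit-Regular.ttf"  # placeholder path
quote = "ความพยายามไม่เคยทรยศใคร"
pen_name = "นิรนาม"
vf = (
    f"drawtext=fontfile={font}:text='{quote}':fontcolor=white:fontsize=64:"
    "x=(w-text_w)/2:y=(h-text_h)/2,"
    f"drawtext=fontfile={font}:text='{pen_name}':fontcolor=white:fontsize=40:"
    "x=(w-text_w)/2:y=(h-text_h)/2+120"
)
subprocess.run(
    ["ffmpeg", "-y", "-i", "background.mp4", "-vf", vf, "-c:a", "copy", "final.mp4"],
    check=True,
)
```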
by Obsidi8n
**How it works:**
1. Send notes from Obsidian via Webhook to start the audio conversion.
2. OpenAI converts your text to natural-sounding audio and generates episode descriptions.
3. Audio files are stored in Cloudinary and automatically attached to your notes in Obsidian.
4. A professional podcast feed is generated, compatible with all major podcast platforms (Apple, Spotify, Google).

**Set up steps:**
1. Install and configure the Post Webhook Plugin in Obsidian.
2. Set up Custom Auth credentials in n8n for Cloudinary using the following JSON:

```json
{
  "name": "Cloudinary API",
  "type": "httpHeaderAuth",
  "authParameter": {
    "type": "header",
    "key": "Authorization",
    "value": "Basic {{Buffer.from('your_api_key:your_api_secret').toString('base64')}}"
  }
}
```

3. Configure the podcast feed metadata (title, author, cover image, etc.).

Note: The second flow is a generic Podcast Feed module that can be reused in any "[...]-to-Podcast" workflow. It generates a standard RSS feed from Google Sheets data and podcast metadata, making it compatible with all major podcast platforms.
by Yang
📽️ **What this workflow does**
This workflow turns a user-submitted form with country or animal names into a cinematic video with animated scenes and immersive ambient audio. Using GPT-4 for prompt generation, Dumpling AI for visual creation, Replicate for motion animation, ElevenLabs for sound generation, and Creatomate for video stitching, it fully automates video production, from raw idea to rendered file.

🎯 **What problem is this solving?**
Creating engaging multimedia content can take hours. This workflow automates the entire process of ideation, design, and rendering of high-quality cinematic clips, eliminating the need for manual video editing or audio production.

👥 **Who is this for?**
- Content creators and educators
- Digital artists and storytellers
- Marketers or YouTubers creating short-form visual content
- No-code/AI automation enthusiasts

⚙️ **Setup Instructions**

✅ **Step 1: Google Sheet**
Create a Google Sheet with two columns: Title and Generated videos. Update the Sheet ID and tab name in the final node.

✅ **Step 2: Google Drive**
Create two folders: one for ambient audio tracks, one for final generated videos. Update the folder IDs in both Google Drive nodes.

✅ **Step 3: Credentials Setup**
Make sure all your API tokens are saved as credentials in n8n. This workflow uses the following integrations: OpenAI (GPT-4), Dumpling AI (via HTTP header), Replicate.com, ElevenLabs, Google Drive, Google Sheets, and Creatomate.

✅ **Step 4: Form Fields**
Ensure your trigger form includes these fields:
- Title
- Country 1, Country 2, Country 3, Country 4
- Style (e.g., cinematic, epic, fantasy, noir, etc.)

🧩 **How it works**
1. **User Form Submission**: Kicks off the workflow with the required inputs.
2. **Format Inputs**: Combines all 4 countries/animals into a single array.
3. **GPT-4: Generate Visual Prompts**: Uses GPT-4 to create rich cinematic descriptions per animal/country.
4. **Dumpling AI: Create Images**: Each description becomes a high-quality visual.
5. **GPT-4: Create Motion Prompts**: Each image prompt is rewritten into a motion-based video prompt.
6. **Replicate: Animate**: Prompts and images are sent to Replicate's model for animation (a sketch of this call appears at the end of this description).
7. **GPT-4: Generate Sound Prompt**: Based on the style, GPT-4 creates an ambient sound idea.
8. **ElevenLabs: Create Ambient Audio**: Audio is generated and uploaded to Google Drive.
9. **Creatomate: Stitch All Media**: All 4 motion videos and the audio track are stitched into one cinematic output.
10. **Upload to Google Drive + Log to Sheet**: The final video is saved in Drive and logged in Sheets with its title and link.

🛠️ **How to Customize**
- 🎨 Modify the GPT prompts for different themes (e.g., horror, fantasy, sci-fi).
- 🧠 Swap animals for characters, objects, or locations.
- 🎧 Replace ambient sound with ElevenLabs voiceovers or music.
- 📂 Add metadata logging (generation time, duration, tags).
- 🧪 Try alternative video tools like Pika Labs or Runway ML.

✅ **Requirements**
- n8n self-hosted or cloud instance
- Active accounts for: OpenAI, Dumpling AI, Replicate, ElevenLabs, Creatomate
- Google credentials set up for Drive + Sheets

This is a perfect end-to-end automation that showcases the power of AI + automation for video storytelling.
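A rough sketch of the Replicate call in step 6. The model version ID and input fields are placeholders, since each Replicate model defines its own input schema:

```python
import time
import requests

REPLICATE_TOKEN = "r8_..."  # placeholder
headers = {"Authorization": f"Token {REPLICATE_TOKEN}"}

# Start an image-to-video prediction for one generated scene
pred = requests.post(
    "https://api.replicate.com/v1/predictions",
    headers=headers,
    json={
        "version": "<motion-model-version-id>",
        "input": {"image": "https://example.com/scene.png", "prompt": "slow cinematic pan"},
    },
).json()

# Poll until the render finishes, mirroring the workflow's wait loop
while pred["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(5)
    pred = requests.get(pred["urls"]["get"], headers=headers).json()
print(pred.get("output"))  # URL(s) of the generated clip
```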
by RealSimple Solutions
**Who Is This For?**
This workflow is designed for AI engineers, automation specialists, and content creators who need a scalable system to dynamically manage prompts stored in GitHub. It eliminates manual updates, enforces required variable checks, and ensures that AI interactions always receive fully processed prompts. 🚀

**What Problem Does This Solve?**
Manually managing AI prompts can be inefficient and error-prone. This workflow:
✅ Fetches dynamic prompts from GitHub
✅ Auto-populates placeholders with values from the setVars node
✅ Ensures all required variables are present before execution
✅ Processes the formatted prompt through an AI agent

🛠 **How This Workflow Works**
This workflow consists of three key branches, ensuring smooth prompt retrieval, variable validation, and AI processing.

1️⃣ **Retrieve the Prompt from GitHub** (HTTP Request → Extract from File → SetPrompt)
- The workflow starts manually or via an external trigger.
- It fetches a text-based prompt stored in a GitHub repository.
- The Extract from File node retrieves the content from the GitHub file.
- The SetPrompt node stores the prompt, making it accessible for processing.

📌 Note: The prompt must contain variables in n8n expression format (e.g., {{ $json.company }}) so they can be dynamically replaced.

2️⃣ **Extract & Auto-Populate Variables** (Check All Prompt Vars → Replace Variables)
- A Code node scans the prompt for placeholders in the n8n expression format ({{ $json.variableName }}); a sketch of this check appears at the end of this description.
- The workflow compares the required variables against the setVars node:
  - ✅ If all variables are present, it proceeds to variable replacement.
  - ❌ If any variables are missing, the workflow stops and returns an error listing them.
- The Replace Variables node replaces all placeholders with values from setVars.

📌 Example of a properly formatted GitHub prompt:
Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}.
This ensures seamless replacement when processed in n8n.

3️⃣ **AI Processing & Output** (AI Agent → Prompt Output)
- The Set Completed Prompt node stores the final, processed prompt.
- The AI Agent node (Ollama Chat Model) processes the prompt.
- The Prompt Output node returns the fully formatted response.

📌 Optional: Modify this to use OpenAI, Claude, or other AI models.

⚠️ **Error Handling: Missing Variables**
If a required variable is missing, the workflow stops execution and provides an error message:
⚠️ Missing Required Variables: ["launch_date"]
This ensures no incomplete prompts are sent to AI agents.

✅ **Example Use Case**

📜 GitHub prompt file (using n8n expressions):
Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}.

🔹 Variables in the setVars node:

```json
{
  "company": "PropTechPro",
  "features": "AI-powered Property Management",
  "launch_date": "March 15, 2025"
}
```

✅ Successful output:
Hello PropTechPro, your product AI-powered Property Management launches on March 15, 2025.

🚨 Error output (if launch_date is missing):
⚠️ Missing Required Variables: ["launch_date"]

🔧 **Setup Instructions**

1️⃣ **Connect Your GitHub Repository**
- Store your prompt in a public or private GitHub repo.
- The workflow will fetch the raw file using the GitHub API.

2️⃣ **Configure the SetVars Node**
- Define the required variables in the SetVars node.
- Make sure the variable names match those used in the prompt.

3️⃣ **Test & Run**
- Click Test Workflow to execute.
- If variables are missing, it will show an error.
- If everything is correct, it will output the fully formatted prompt.

⚡ **How to Customize This Workflow**
💡 Need CRM or database integration? Connect the setVars node to an Airtable, Google Sheets, or HubSpot API to pull variables dynamically.
💡 Want to modify the AI model? Replace the Ollama Chat Model with OpenAI, Claude, or a custom LLM endpoint.

📌 **Why Use This Workflow?**
✅ No manual updates required – fetches prompts dynamically from GitHub.
✅ Prevents broken prompts – ensures required variables exist before execution.
✅ Works for any use case – handles AI chat prompts, marketing messages, and chatbot scripts.
✅ Compatible with all n8n deployments – works on Cloud, Self-Hosted, and Desktop versions.
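The variable check in branch 2️⃣ boils down to the following logic, shown here as a Python sketch (the actual Code node may be JavaScript, but the regex and set difference are the same idea):

```python
import re

prompt = "Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}."
set_vars = {"company": "PropTechPro", "features": "AI-powered Property Management"}

# Find every {{ $json.variableName }} placeholder in the prompt
required = set(re.findall(r"\{\{\s*\$json\.(\w+)\s*\}\}", prompt))
missing = sorted(required - set(set_vars))
if missing:
    raise ValueError(f'⚠️ Missing Required Variables: {missing}')

# All present: substitute each value into the prompt
for name in required:
    prompt = re.sub(r"\{\{\s*\$json\." + name + r"\s*\}\}", set_vars[name], prompt)
```

Running this with the values above stops with `⚠️ Missing Required Variables: ['launch_date']`, matching the workflow's error path.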
by Nick Saraev
AI Facebook Ad Spy Tool with Apify, OpenAI, Gemini & Google Sheets

**Categories**: Competitive Intelligence, Marketing Automation, AI Analysis

This workflow creates a comprehensive Facebook ad spy tool that scrapes competitor ads from Facebook's Ad Library and generates detailed analysis with rewritten versions. The system processes text, image, and video ads using different AI models, providing strategic intelligence for PPC agencies and marketers. Built to be sold as a premium service for $2,000+, this tool combines web scraping, multi-modal AI analysis, and competitor intelligence into one powerful automation.

**Benefits**
- **Complete Competitive Intelligence**: Analyze competitor strategies across all ad formats (text, image, video).
- **Multi-Modal AI Analysis**: Uses GPT-4 Vision for images and Gemini for video content understanding.
- **Automated Ad Rewriting**: Generates inspired variations of successful competitor ads.
- **Quality Filtering**: Targets high-performing advertisers with significant page likes.
- **Scalable Processing**: Handles hundreds of competitor ads with detailed strategic analysis.
- **Premium Service Potential**: Easily sold to agencies and marketers for $2,000+ implementations.

**How It Works**

*Facebook Ad Library Scraping:*
- Connects to Facebook's public Ad Library through Apify's specialized scraper (a sketch of the API call appears at the end of this description).
- Searches for active ads using customizable keywords and targeting parameters.
- Extracts comprehensive ad data including creative assets, targeting info, and engagement metrics.
- Filters results to focus on high-quality advertisers with substantial page followings.

*Intelligent Content Routing:*
- Automatically categorizes ads into text-only, image-based, or video content types.
- Routes each ad type to a specialized processing pipeline optimized for that content format.
- Ensures appropriate AI models are used for each type of creative analysis.
- Maintains data integrity while processing different content formats simultaneously.

*Advanced Video Analysis Pipeline:*
- Downloads video ads directly from Facebook's content delivery network.
- Uploads videos to Google Drive for temporary storage and processing.
- Initiates Gemini AI video upload sessions for multi-modal analysis.
- Uses Gemini's advanced video understanding to generate detailed content descriptions.
- Processes video narrative, visual elements, messaging strategy, and target audience insights.

*Image and Text Processing:*
- Analyzes image ads using GPT-4 Vision for comprehensive visual content understanding.
- Processes text-only ads using GPT-4 for messaging strategy and copywriting analysis.
- Identifies key persuasion techniques, target demographics, and messaging frameworks.
- Generates detailed competitive intelligence reports for each ad format.

*Strategic Intelligence Generation:*
- Creates comprehensive summaries analyzing competitor messaging strategies and target audiences.
- Generates rewritten ad copy that captures successful elements while avoiding direct copying.
- Produces recreation prompts for images and videos that can be used with AI generation tools.
- Organizes all insights in a structured Google Sheets database for easy analysis and reporting.

**Required Setup Configuration**

*Apify Integration:*
- Sign up for an Apify account and obtain an API key.
- Replace <your-apify-api-key-here> in the "Run Ad Library Scraper" node.
- Customize the Facebook Ad Library search URLs with your target keywords and regions.

*AI Service Configuration:*
- **OpenAI API**: Set up for text analysis and image understanding with GPT-4 Vision.
- **Gemini API**: Configure for advanced video content analysis and description. Replace <your-gemini-api-key-here> in all Gemini-related nodes.

*Google Services Setup:*
- **Google Drive**: Configure OAuth for temporary video storage during Gemini processing.
- **Google Sheets**: Create a results database with the proper column structure for ad intelligence storage.

*Facebook Ad Library Search Configuration:*
Customize the search parameters in the Apify scraper.

*Google Sheets Database Structure:*
Create a sheet with these columns:
- ad_archive_id - Unique Facebook ad identifier
- page_id - Advertiser's Facebook page ID
- page_name - Advertiser's business name
- page_url - Link to the advertiser's Facebook page
- type - Ad format (text, image, or video)
- date_added - When the ad was analyzed
- summary - Detailed competitive intelligence analysis
- rewritten_ad_copy - AI-generated inspired version
- image_prompt - Description for recreating image ads
- video_prompt - Description for recreating video ads

**Business Use Cases**
- **PPC Agencies**: Offer comprehensive competitor analysis services to clients for strategic advantage.
- **Marketing Teams**: Research competitor strategies and messaging before launching new campaigns.
- **E-commerce Businesses**: Analyze successful ads in your industry for creative inspiration.
- **SaaS Companies**: Study how competitors position their products and target audiences.
- **Course Creators**: Research educational content marketing approaches and messaging strategies.
- **Affiliate Marketers**: Identify successful promotional strategies and high-converting ad formats.

**Difficulty Level**: Advanced
**Estimated Build Time**: 3-4 hours
**Monthly Operating Cost**: ~$200 (Apify + OpenAI + Gemini + Google Workspace APIs)

**Watch My Complete Build Process**
Want to see exactly how I built this entire Facebook ad spy system from scratch? I walk through the complete development process live, including API integrations, multi-modal AI setup, error handling, and the exact business strategy for selling this as a premium service.

🎥 Watch My Live Build: "Build A Facebook Ads Spy Tool With N8N (Sell for $2k+)"

This comprehensive tutorial shows the real development process, including complex API orchestration, multi-modal AI integration, and proven strategies for monetizing competitive intelligence systems.

**Set Up Steps**

*Apify Scraper Configuration:*
- Set up an Apify account and configure the Facebook Ad Library scraper.
- Customize search parameters for your target industries and regions.
- Configure result limits and filtering parameters for quality control.
- Test the scraper with sample searches to verify data quality.

*Multi-Modal AI Setup:*
- Configure OpenAI API credentials for text and image analysis.
- Set up Gemini API access for advanced video content understanding.
- Configure appropriate rate limits and error handling for API stability.
- Test AI analysis with sample ads to optimize prompt quality.

*Google Services Integration:*
- Set up Google Drive OAuth for temporary video storage during processing.
- Create the Google Sheets database with the proper column structure for intelligence storage.
- Configure sharing permissions and access controls for team collaboration.
- Test the complete data flow from scraping to final intelligence reports.

*Quality Control and Filtering:*
- Configure the page-likes threshold in the "Filter For Likes" node (1,000+ recommended for quality).
- Adjust the content routing logic in the Switch node based on your analysis needs.
- Set up error handling and retry logic for reliable large-scale processing.
- Test the complete workflow with various ad types to ensure proper routing.

*Advanced Customization:*
- Customize AI prompts for your specific industry analysis needs.
- Configure additional filtering criteria beyond page likes.
- Set up automated scheduling for regular competitor monitoring.
- Add custom fields to the database for tracking specific competitive metrics.

**Advanced Features**
Scale the system with additional capabilities:
- **Industry-Specific Analysis**: Customize prompts and filters for different verticals.
- **Trend Tracking**: Monitor messaging changes over time for strategic insights.
- **Performance Correlation**: Cross-reference ad engagement with business outcomes.
- **Alert Systems**: Get notified when competitors launch new campaign types.
- **Custom Reporting**: Generate client-ready intelligence reports automatically.
- **Integration Extensions**: Connect to CRM and marketing platforms for a strategic workflow.

**Important Considerations**
- **API Rate Limits**: Built-in delays and error handling prevent service interruptions.
- **Content Rights**: The system generates inspired variations, not direct copies, for legal compliance.
- **Data Storage**: Organize the intelligence database for easy client reporting and analysis.
- **Scalability**: Batch processing handles hundreds of ads efficiently without blocking.
- **Quality Assurance**: Filtering logic ensures analysis focuses on successful, high-quality advertisers.

**Why This System Works**
The competitive advantage lies in comprehensive multi-modal analysis:
- **Complete format coverage**: analyzes text, image, and video ads with appropriate AI models.
- **Strategic depth**: goes beyond basic scraping to provide actionable intelligence.
- **Automation scale**: processes competitor research that would take weeks manually.
- **Premium positioning**: advanced AI analysis justifies higher service pricing.
- **Immediate value**: clients receive actionable insights within hours of setup.

**Check Out My Channel**
For more advanced automation systems that generate real business results and premium service opportunities, explore my YouTube channel where I share proven strategies for building profitable automation businesses.
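A rough sketch of the Apify call behind the "Run Ad Library Scraper" node. The actor ID, input object, and output field names are assumptions; check the actor's documentation for its real schema:

```python
import requests

APIFY_TOKEN = "your-apify-api-key"  # placeholder
ACTOR_ID = "<facebook-ad-library-scraper-actor-id>"  # placeholder

# Run the scraper synchronously and fetch its dataset items in a single call
ads = requests.post(
    f"https://api.apify.com/v2/acts/{ACTOR_ID}/run-sync-get-dataset-items",
    params={"token": APIFY_TOKEN},
    json={"startUrls": [{"url": "https://www.facebook.com/ads/library/?q=crm%20software"}]},
).json()

# Quality filter, as in the "Filter For Likes" node (field name is an assumption)
quality_ads = [ad for ad in ads if ad.get("page_like_count", 0) >= 1000]
print(f"{len(quality_ads)} ads passed the quality filter")
```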
by Jimleuk
This n8n template takes a video, extracts frames from it, and uses them with a multimodal LLM to generate a script. The script is then passed to the same multimodal LLM to generate a voiceover clip. This template was inspired by "Processing and narrating a video with GPT's visual capabilities and the TTS API".

**How it works**
- The video is downloaded using the HTTP node.
- A Python code node extracts the frames using OpenCV (see the sketch at the end of this description).
- A Loop node batches the frames for the LLM to generate partial scripts.
- All partial scripts are combined to form the full script, which is then sent to OpenAI to generate audio from it.
- The finished voiceover clip is uploaded to Google Drive.

Sample the finished product here: https://drive.google.com/file/d/1-XCoii0leGB2MffBMPpCZoxboVyeyeIX/view?usp=sharing

**Requirements**
- OpenAI for the LLM
- Ideally, a mid-range (16GB RAM) machine for acceptable performance!

**Customising this workflow**
- For larger videos, consider splitting them into smaller clips for better performance.
- Use a multimodal LLM that fully supports video, such as Google's Gemini.
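The frame-extraction code node does essentially the following (a minimal sketch; the sampling rate is an assumption to tune for your footage):

```python
import base64
import cv2

# Sample every Nth frame and base64-encode it for the multimodal LLM
video = cv2.VideoCapture("video.mp4")
frames, index, every_nth = [], 0, 30  # ~1 frame per second at 30 fps
while True:
    ok, frame = video.read()
    if not ok:
        break
    if index % every_nth == 0:
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            frames.append(base64.b64encode(buf.tobytes()).decode())
    index += 1
video.release()
print(f"Extracted {len(frames)} frames")
```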
by Chris Carr
**Use Case**
When creating chatbots that interface through applications such as Telegram and WhatsApp, users often send multiple shorter messages in quick succession in place of a single, longer message. This workflow accounts for that behaviour.

**What it Does**
This workflow allows users to send several messages in quick succession, treating them as one coherent conversation instead of separate messages requiring individual responses.

**How it Works**
1. When messages arrive, they are stored in a Supabase PostgreSQL table.
2. The system waits briefly to see if additional messages arrive.
3. If no new messages arrive within the waiting period, all queued messages are:
   - Combined and processed as a single conversation
   - Responded to with one unified reply
   - Deleted from the queue

A sketch of the post-wait check appears at the end of this description.

**Setup**
1. Create a table in Supabase called message_queue. It needs the following columns: user_id (uint8), message (text), and message_id (uint8).
2. Add your Telegram, Supabase, OpenAI, and PostgreSQL credentials.
3. Activate the workflow and test it by sending multiple messages to the Telegram bot in one go.
4. Wait ten seconds, after which you will receive a single reply to all of your messages.

**How to Modify it to Your Needs**
- Change the value of Wait Amount in the Wait 10 Seconds node to modify the buffering window.
- Add a System Message to the AI Agent to tailor it to your specific use case.
- Replace the OpenAI sub-node to use a different language model.
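The debounce decision after the wait reduces to one query: is the triggering message still the newest in the queue? A Python sketch of that check, assuming direct Postgres access via psycopg2 (in the workflow itself this is a PostgreSQL node query), with placeholder connection details:

```python
import psycopg2

# Reply only if the triggering message is still the newest one for this user;
# otherwise a later execution owns the conversation.
conn = psycopg2.connect("postgresql://user:password@db.example.supabase.co:5432/postgres")

def should_reply(user_id: int, triggering_message_id: int) -> bool:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT MAX(message_id) FROM message_queue WHERE user_id = %s",
            (user_id,),
        )
        (newest,) = cur.fetchone()
        return newest == triggering_message_id
```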
by Julian Kaiser
This automated workflow scrapes and processes the monthly "Who is Hiring" thread from Hacker News, transforming raw job listings into structured data for analysis or integration with other systems. Perfect for job seekers, recruiters, or anyone looking to monitor tech job market trends.

**How it works**
- Automatically fetches the latest "Who is Hiring" thread from Hacker News.
- Extracts and cleans relevant job posting data using the HN API (see the sketch at the end of this description).
- Splits and processes individual job listings into a structured format.
- Parses key information like location, role, requirements, and company details.
- Outputs clean, structured data ready for analysis or export.

**Set up steps**
1. Configure API access to the [Hacker News API](https://github.com/HackerNews/API) (no authentication required).
2. Follow the steps to get your cURL command from https://hn.algolia.com/
3. Set up your desired output format (JSON structured data or a custom format).
4. Optional: Configure additional parsing rules for specific job listing information.
5. Optional: Set up integration with your preferred storage or analysis tools.

The workflow transforms unstructured job listings into clean, structured data following this pattern:
- **Input**: Raw HN thread comments
- **Process**: Extract, clean, and parse text
- **Output**: Structured job listing data

This template saves hours of manual work collecting and organizing job listings, making it easier to track and analyze tech job opportunities from Hacker News's popular monthly hiring threads.
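A minimal Python sketch of the fetch step, using the Algolia HN search API and items endpoint (the workflow does this with HTTP Request nodes; note the whoishiring account posts several monthly threads, hence the query filter):

```python
import requests

# Find the latest "Who is hiring?" thread from the official whoishiring account
hits = requests.get(
    "https://hn.algolia.com/api/v1/search_by_date",
    params={"query": "Who is hiring?", "tags": "story,author_whoishiring"},
).json()["hits"]
thread_id = hits[0]["objectID"]

# Pull the full thread; top-level children are the individual job postings
thread = requests.get(f"https://hn.algolia.com/api/v1/items/{thread_id}").json()
listings = [c["text"] for c in thread["children"] if c.get("text")]
print(f"{len(listings)} top-level job postings")
```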
by Franz
🕸️ Dynamic Website Change Monitor with Smart Email Alerts

Never miss important website updates again! This workflow automatically tracks changes on dynamic websites (think React apps, JavaScript-heavy sites) and sends you instant email notifications when something changes. Perfect for keeping tabs on competitors, monitoring product updates, or staying on top of important announcements.

✨ **What makes this special?**
- 🚀 **Handles dynamic websites**: Uses the Firecrawl API to scrape JavaScript-rendered content that basic scrapers can't touch.
- 📧 **Smart email alerts**: Only sends notifications when content actually changes (no spam!).
- 📊 **Historical tracking**: Keeps a complete log of all changes in Google Sheets.
- 🛡️ **Bulletproof**: Continues working even if one part fails.
- ⚡ **Ready to deploy**: Webhook-triggered, perfect for cron jobs or external schedulers.

🎯 **Perfect for monitoring:**
- Competitor pricing pages
- Job board postings
- Product availability updates
- News sites for breaking stories
- API documentation changes
- Terms of service updates

🛠️ **What you'll need to get started:**
1. **Firecrawl account** 🔥: Sign up at firecrawl.dev, grab your API key from the dashboard, and create a "Bearer Auth" credential in n8n.
2. **Google Cloud setup** ☁️: Enable the Google Sheets API and Gmail API, set up OAuth2 credentials, and add both as credentials in n8n.
3. **Google Sheets document** 📋: Create a new spreadsheet, add two tabs ("Log" and "comparison"), and follow the structure outlined in the workflow notes.

🚀 **How it works:**
1. Webhook receives a trigger → starts the monitoring process.
2. Firecrawl scrapes the website → gets fresh content, even JavaScript-rendered (a sketch of this call appears at the end of this description).
3. Smart comparison → checks against previously stored content.
4. Change detected? → If yes, send an email + log everything.
5. Update storage → prepares for the next monitoring cycle.

⚙️ **Setup steps:**
1. Import this workflow into your n8n instance.
2. Configure credentials for Firecrawl, Google Sheets, and Gmail.
3. Update the target URL in the Firecrawl node.
4. Set your email address in the Gmail node.
5. Create your Google Sheets with the required structure.
6. Test it manually first, then activate!

🎨 **Customize it your way:**
- **Target any website** by updating the URL.
- **Change email templates** to match your style.
- **Adjust monitoring frequency** with external cron jobs.
- **Switch between markdown/HTML** extraction formats.
- **Fine-tune change detection** sensitivity.

🔧 **Troubleshooting:**
- **Firecrawl errors?** Check your API key and rate limits.
- **Google Sheets issues?** Verify OAuth permissions and sheet structure.
- **Email not sending?** Check Gmail API quotas and spam folders.
- **Webhook problems?** Make sure the workflow is activated.

Ready to never miss another website change? Let's get this automation running! 🎉
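A Python sketch of the scrape-and-compare core, assuming Firecrawl's v1 scrape endpoint and markdown format (the workflow stores the previous snapshot in Google Sheets; a local file stands in for it here):

```python
from pathlib import Path
import requests

FIRECRAWL_KEY = "fc-..."  # placeholder

# Scrape the JavaScript-rendered page as markdown
resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": f"Bearer {FIRECRAWL_KEY}"},
    json={"url": "https://example.com/pricing", "formats": ["markdown"]},
).json()
content = resp["data"]["markdown"]

# Compare against the previously stored copy to decide whether to alert
snapshot = Path("last_snapshot.md")
previous = snapshot.read_text() if snapshot.exists() else ""
if content.strip() != previous.strip():
    print("Change detected: send the email and append to the Log sheet")
    snapshot.write_text(content)
```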
by Ranjan Dailata
**Who this is for**
This workflow is designed for professionals and teams who need real-time, structured insights from Perplexity Search results without manual effort.

**What problem is this workflow solving?**
This n8n workflow solves the problem of automating Perplexity Search result extraction, cleanup, summarization, and AI-enhanced formatting for downstream use, such as sending the results to a webhook or another system.

**What this workflow does**
- **Automates Perplexity Search via Bright Data**: Uses Bright Data's proxy-based SERP API to run the search query programmatically (a sketch of this request appears at the end of this description), making the process repeatable and scriptable with different search terms and regions/zones.
- **Cleans and extracts useful content**: The Readable Data Extractor uses LLM-based cleaning to remove HTML/CSS/JS from the response and extract pure text data, converting messy, unstructured web content into a structured, machine-readable format.
- **Summarizes search results**: Through the Gemini Flash + Summarization Chain, it generates a concise summary of the search results. Ideal for users who don't have time to read full pages of results.
- **Formats data using an AI Agent**: The AI Agent acts like a virtual assistant that understands the search results, formats them in a readable, JSON-compatible form, and prepares them for webhook delivery.
- **Delivers results to a webhook**: Sends the final summary + structured search result to a webhook (your app, a Slack bot, Google Sheets, or a CRM).

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Perplexity Search Request node with the prompt you wish to search for.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**
1. **Change the Perplexity search input.** By default the workflow searches a fixed query or dataset. Customize it to accept input from a Google Sheet, Airtable, or a form, or auto-trigger searches based on keywords or schedules.
2. **Customize the summarization style (LLM output).** By default it produces a general summary using Google Gemini or OpenAI. Customize the tone (formal, casual, technical, executive summary, etc.), focus on specific sections (pricing, competitors, FAQs), translate the summaries into multiple languages, or add bullet points, pros/cons, or insight tags.
3. **Choose where the results go.** Options include email, Slack, Notion, Airtable, Google Docs, or a dashboard. Auto-create content drafts for WordPress or newsletters, or feed the results into CRM notes or attach them to Salesforce leads.
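A hedged Python sketch of the Bright Data request behind the Perplexity Search Request node. The endpoint follows Bright Data's unified request API as commonly documented; the zone name, target URL, and token are placeholders to replace with your own:

```python
import requests

BEARER_TOKEN = "XXXXXXXXXXXXXX"  # your Web Unlocker token

# Route the search through the Web Unlocker zone and return the raw page body
resp = requests.post(
    "https://api.brightdata.com/request",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    json={
        "zone": "web_unlocker1",  # your zone name
        "url": "https://www.perplexity.ai/search?q=latest+ai+news",
        "format": "raw",
    },
)
html = resp.text  # handed to the LLM-based Readable Data Extractor downstream
```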