by Budi SJ
**Automated Financial Reporting Using Google Vision OCR, Telegram & Google Sheets**

This workflow automates the recording of financial transactions from photos of receipts. Users simply send an image of the receipt via Telegram. The image is processed with the Google Vision API to detect text, which is then extracted and structured by an LLM via OpenRouter. The final result is saved to Google Sheets and also sent back to the user via the Telegram bot.

**🧾 Google Sheets Template**
Create a Google Sheet using this template: Financial Reporting

**🛠️ Key Features**
- The workflow starts when a user sends a photo of a receipt to the Telegram bot.
- The image is converted to text using the Google Vision API's OCR (see the request sketch at the end of this section).
- An LLM (via OpenRouter) identifies and structures transaction elements such as: date, vendor name & address, receipt/invoice number, item list (product name, quantity, unit price, total), and transaction category.
- Cleaned and structured data is automatically recorded to Google Sheets, one row per item.
- The bot also replies with a summary of the recorded results in an easy-to-read text format.
- Users can also send text messages to the bot to query stored transaction data, answered by a Google Sheets-based AI Agent.

**🔧 Requirements**
- Active Telegram bot + API token
- Google Vision API key
- OpenRouter account + API key
- Google Sheets connected to n8n

**🧩 Setup Instructions**
1. Replace all API keys and tokens with your own in the relevant nodes:
   - Google Vision API Key: set in the 'Set Vision API' node.
   - Telegram Bot Token: set in the 'Set Telegram Token' node and all Telegram nodes.
   - OpenRouter API Key: set in all OpenRouter nodes.
   - Google Sheets: connect your own Google Sheets credential.
2. Use the provided Google Sheets template or your own.
3. Activate the workflow after configuration.
4. (Optional) Review the sticky notes for step-by-step explanations.
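For orientation, here is a minimal sketch of the kind of OCR request the 'Set Vision API' key ultimately drives: a POST to Google Vision's documented `images:annotate` endpoint with the receipt photo base64-encoded (`<YOUR_API_KEY>` is a placeholder):

```
POST https://vision.googleapis.com/v1/images:annotate?key=<YOUR_API_KEY>
Content-Type: application/json

{
  "requests": [
    {
      "image": { "content": "<base64-encoded receipt photo>" },
      "features": [ { "type": "TEXT_DETECTION" } ]
    }
  ]
}
```

The full detected text comes back in `responses[0].textAnnotations[0].description`, which is the raw text the workflow can hand to the LLM for structuring.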
by Incrementors
**LinkedIn & Indeed Job Scraper with Bright Data & Google Sheets Export**

**Overview**
This n8n workflow automates the process of scraping job listings from both LinkedIn and Indeed simultaneously, combining the results, and exporting the data to Google Sheets for comprehensive job market analysis. It integrates with Bright Data for professional web scraping and Google Sheets for data storage, and provides intelligent status monitoring with retry mechanisms.

**Workflow Components**

1. **📝 Trigger Input Form**
   - **Type**: Form Trigger
   - **Purpose**: Initiates the workflow with user-defined job search criteria
   - **Input Fields**: City (required), Job Title (required), Country (required), Job Type (optional dropdown: Full-Time, Part-Time, Remote, WFH, Contract, Internship, Freelance)
   - **Function**: Captures user requirements to start the dual-platform job scraping process

2. **🧠 Format Input for APIs**
   - **Type**: Code Node (JavaScript)
   - **Purpose**: Prepares and formats user input for both LinkedIn and Indeed APIs
   - **Processing**: Standardizes location and job title formats, creates API-specific input structures, generates custom output field configurations
   - **Function**: Ensures compatibility with both Bright Data datasets

3. **🚀 Start Indeed Scraping**
   - **Type**: HTTP Request (POST)
   - **Purpose**: Initiates Indeed job scraping via Bright Data (see the request sketch after this list)
   - **Endpoint**: https://api.brightdata.com/datasets/v3/trigger
   - **Parameters**: Dataset ID: gd_lpfll7v5hcqtkxl6l; include errors: true; type: discover_new; discover by: keyword; limit per input: 2
   - **Custom Output Fields**: jobid, company_name, job_title, description_text, location, salary_formatted, company_rating, apply_link, url, date_posted, benefits

4. **🚀 Start LinkedIn Scraping**
   - **Type**: HTTP Request (POST)
   - **Purpose**: Initiates LinkedIn job scraping via Bright Data (parallel execution)
   - **Endpoint**: https://api.brightdata.com/datasets/v3/trigger
   - **Parameters**: Dataset ID: gd_l4dx9j9sscpvs7no2; include errors: true; type: discover_new; discover by: keyword; limit per input: 2
   - **Custom Output Fields**: job_posting_id, job_title, company_name, job_location, job_summary, job_employment_type, job_base_pay_range, apply_link, url, job_posted_date, company_logo

5. **🔄 Check Indeed Status**
   - **Type**: HTTP Request (GET)
   - **Purpose**: Monitors Indeed scraping job progress
   - **Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
   - **Function**: Checks if the Indeed dataset scraping is complete

6. **🔄 Check LinkedIn Status**
   - **Type**: HTTP Request (GET)
   - **Purpose**: Monitors LinkedIn scraping job progress
   - **Endpoint**: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
   - **Function**: Checks if the LinkedIn dataset scraping is complete

7. **⏱️ Wait Nodes (60 seconds each)**
   - **Type**: Wait Node
   - **Purpose**: Implements an intelligent polling mechanism
   - **Duration**: 1 minute
   - **Function**: Pauses the workflow before rechecking scraping status to prevent API overload

8. **✅ Verify Indeed Completion**
   - **Type**: IF Condition
   - **Purpose**: Evaluates Indeed scraping completion status
   - **Condition**: status === "ready"
   - **Logic**: True: proceeds to data validation; False: loops back to the status check with a wait

9. **✅ Verify LinkedIn Completion**
   - **Type**: IF Condition
   - **Purpose**: Evaluates LinkedIn scraping completion status
   - **Condition**: status === "ready"
   - **Logic**: True: proceeds to data validation; False: loops back to the status check with a wait

10. **📊 Validate Indeed Data**
    - **Type**: IF Condition
    - **Purpose**: Ensures Indeed returned job records
    - **Condition**: records !== 0
    - **Logic**: True: proceeds to fetch Indeed data; False: skips Indeed data retrieval

11. **📊 Validate LinkedIn Data**
    - **Type**: IF Condition
    - **Purpose**: Ensures LinkedIn returned job records
    - **Condition**: records !== 0
    - **Logic**: True: proceeds to fetch LinkedIn data; False: skips LinkedIn data retrieval

12. **📥 Fetch Indeed Data**
    - **Type**: HTTP Request (GET)
    - **Purpose**: Retrieves final Indeed job listings
    - **Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
    - **Format**: JSON
    - **Function**: Downloads completed Indeed job data

13. **📥 Fetch LinkedIn Data**
    - **Type**: HTTP Request (GET)
    - **Purpose**: Retrieves final LinkedIn job listings
    - **Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
    - **Format**: JSON
    - **Function**: Downloads completed LinkedIn job data

14. **🔗 Merge Results**
    - **Type**: Merge Node
    - **Purpose**: Combines Indeed and LinkedIn job results
    - **Mode**: Merge all inputs
    - **Function**: Creates a unified dataset from both platforms

15. **📊 Save to Google Sheet**
    - **Type**: Google Sheets Node
    - **Purpose**: Exports combined job data for analysis
    - **Operation**: Append rows
    - **Target**: "Compare" sheet in the specified Google Sheet document
    - **Data Mapping**: Job Title, Company Name, Location, Job Detail (description), Apply Link, Salary, Job Type, Discovery Input
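As a rough illustration of what the two trigger nodes send: the query-parameter shape below follows Bright Data's dataset trigger API, but the body field names for keyword discovery are assumptions for illustration; check your dataset's input schema for the real names:

```
POST https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_lpfll7v5hcqtkxl6l&include_errors=true&type=discover_new&discover_by=keyword&limit_per_input=2
Authorization: Bearer <your-bright-data-api-key>
Content-Type: application/json

[
  { "keyword": "software engineer", "location": "New York", "country": "US" }
]
```

The response includes a snapshot ID, which the status-check and fetch nodes then use.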
**Workflow Flow**

```
Input Form → Format APIs → [Indeed Trigger] + [LinkedIn Trigger]
                  ↓                      ↓
            Check Status           Check Status
                  ↓                      ↓
              Wait 60s               Wait 60s
                  ↓                      ↓
            Verify Ready           Verify Ready
                  ↓                      ↓
           Validate Data          Validate Data
                  ↓                      ↓
            Fetch Indeed          Fetch LinkedIn
                  ↓                      ↓
                  └──── Merge Results ────┘
                            ↓
                  Save to Google Sheet
```

**Configuration Requirements**

API Keys & Credentials:
- **Bright Data API Key**: Required for both LinkedIn and Indeed scraping
- **Google Sheets OAuth2**: For data storage and export access
- **n8n Form Webhook**: For user input collection

Setup Parameters:
- **Google Sheet ID**: Target spreadsheet identifier
- **Sheet Name**: "Compare" tab for job data export
- **Form Webhook ID**: User input form identifier
- **Dataset IDs**: Indeed: gd_lpfll7v5hcqtkxl6l; LinkedIn: gd_l4dx9j9sscpvs7no2

**Key Features**

Dual Platform Scraping:
- Simultaneous LinkedIn and Indeed job searches
- Parallel processing for faster results
- Comprehensive job market coverage
- Platform-specific field extraction

Intelligent Status Monitoring:
- Real-time scraping progress tracking
- Automatic retry mechanisms with 60-second intervals
- Data validation before processing
- Error handling and timeout management

Smart Data Processing:
- Unified data format from both platforms
- Intelligent field mapping and standardization
- Duplicate detection and removal
- Rich metadata extraction

Google Sheets Integration:
- Automatic data export and storage
- Organized comparison format
- Historical job search tracking
- Easy sharing and collaboration

Form-Based Interface:
- User-friendly job search form
- Flexible job type filtering
- Multi-country support
- Real-time workflow triggering

**Use Cases**

Personal Job Search:
- Comprehensive multi-platform job hunting
- Automated daily job searches
- Organized opportunity comparison
- Application tracking and management

Recruitment Services:
- Client job search automation
- Market availability assessment
- Competitive salary analysis
- Bulk candidate sourcing

Market Research:
- Job market trend analysis
- Salary benchmarking studies
- Skills demand assessment
- Geographic opportunity mapping

HR Analytics:
- Competitor hiring intelligence
- Role requirement analysis
- Compensation benchmarking
- Talent market insights

**Technical Notes**
- **Polling Interval**: 60-second status checks for both platforms (see the sketch below)
- **Result Limiting**: Maximum 2 jobs per input per platform
- **Data Format**: JSON with structured field mapping
- **Error Handling**: Comprehensive error tracking in all API requests
- **Retry Logic**: Automatic status rechecking until completion
- **Country Support**: Adaptable domain selection (indeed.com, fr.indeed.com)
- **Form Validation**: Required fields with optional job type filtering
- **Merge Strategy**: Combines all results from both platforms
- **Export Format**: Standardized Google Sheets columns for easy analysis
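The polling loop is then just a GET against the progress endpoint named above, repeated after each 60-second wait. A minimal sketch (the snapshot ID is a placeholder; the response fields shown are the ones the workflow's IF nodes check):

```
GET https://api.brightdata.com/datasets/v3/progress/<snapshot_id>
Authorization: Bearer <your-bright-data-api-key>

Example response evaluated by the IF nodes:
{ "status": "ready", "records": 4 }
```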
**Sample Data Output**

| Field | Description | Example |
|-------|-------------|---------|
| Job Title | Position title | "Senior Software Engineer" |
| Company Name | Hiring organization | "Tech Solutions Inc." |
| Location | Job location | "San Francisco, CA" |
| Job Detail | Full description | "We are seeking a senior developer..." |
| Apply Link | Direct application URL | "https://company.com/careers/123" |
| Salary | Compensation info | "$120,000 - $150,000" |
| Job Type | Employment details | "Full-time, Remote" |

**Setup Instructions**
1. Import Workflow: Copy the JSON configuration into n8n
2. Configure Bright Data: Add API credentials for both datasets
3. Setup Google Sheets: Create the target spreadsheet and configure OAuth
4. Update References: Replace placeholder IDs with your actual values
5. Test Workflow: Submit a test form and verify the data export
6. Activate: Enable the workflow and share the form URL with users

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Lucas Peyrin
**How it works**

This template is a hands-on, practical exam designed to test your understanding of the fundamental JSON data types. It's the perfect way to solidify your knowledge after learning the basics. Think of it as the "driver's test" that comes after the "theory lesson". You'll be given a series of tasks, and the workflow will automatically check your answers, providing instant feedback.

The test is broken down into six sequential challenges, each focusing on a core data type (a cheat sheet at the end of this section shows all six):
- **String**: Writing text values correctly.
- **Number**: Using integers and decimals.
- **Boolean**: Working with true and false.
- **Null**: Representing a non-existent value.
- **Array**: Creating ordered lists of data.
- **Object**: Building nested key-value structures.

For each challenge, you'll modify a Set node with the correct JSON syntax. When you execute the workflow, a corresponding IF node will validate your input. A green path means you passed and can move to the next challenge. A red path means you need to try again!

**Set up steps**

Setup time: < 1 minute. This workflow is a self-contained test and requires no setup or credentials.

1. Read the instructions on the main sticky note to understand the goal.
2. Start with the first challenge, "Test - String". Activate and modify the node according to the instructions on the purple sticky note next to it.
3. Click "Execute Workflow". If the execution path is green, you've passed! Move on to the next "Test" node in the sequence to continue. If the path is red, read the hint in the error message and try again.
4. Repeat the process until you reach the final success message.

Good luck!
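And if you get stuck, here is the cheat sheet: one valid JSON object demonstrating all six data types at once:

```json
{
  "a_string": "hello world",
  "a_number": 42.5,
  "a_boolean": true,
  "a_null": null,
  "an_array": [1, "two", false],
  "an_object": { "nested_key": "nested value" }
}
```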
by CustomJS
**n8n Workflow: Automating Website Screenshots from Google Sheets**

This n8n workflow captures screenshots of websites listed in a Google Sheet and saves them to Google Drive using the CustomJS PDF Toolkit (@custom-js/n8n-nodes-pdf-toolkit).

**Features**
- **Monitors** a Google Sheet for new rows with website URLs.
- **Captures** screenshots of the websites using the CustomJS PDF Toolkit.
- **Uploads** the screenshots to a specified Google Drive folder.

**Notice**
Community nodes can only be installed on self-hosted instances of n8n.

**Requirements**
- **Self-hosted** n8n instance
- A Google Sheets document containing website URLs and names
- A Google Drive folder to store the screenshots
- A CustomJS API key for website screenshots
- n8n **credentials** for Google Sheets and Google Drive

**Workflow Steps**
1. Google Sheets Trigger: Monitors a specified sheet for new rows and extracts the Url and Name from the row.
2. Website Screenshot Node: Uses the CustomJS PDF Toolkit to take a screenshot of the given URL.
3. Google Drive Upload: Saves the screenshot to a specific Google Drive folder, using the Name column as the filename.

**Setup Guide**
1. Connect Google Sheets: Ensure your Google Sheet has a column named Url for website URLs and Name for website names (see the example layout at the end of this section). Set up Google Sheets credentials in n8n.
2. Configure CustomJS API: Sign up at CustomJS, retrieve your API key from the profile page, and add it as n8n credentials.
3. Set Up Google Drive: Create a folder in Google Drive to store screenshots, copy the folder ID, and set it in the Google Drive node in n8n.

**Perfect for:**
- **Website monitoring**
- **Generating visual archives of web pages**
- **Automating content curation**

This workflow streamlines the process of capturing and organizing website screenshots efficiently.
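For illustration, the expected sheet layout (the column headers must match the node configuration; the rows are sample values):

| Url | Name |
|-----|------|
| https://example.com | Example Homepage |
| https://n8n.io | n8n Website |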
by Jez
**Summary**

This n8n workflow implements an AI-powered agent that intelligently uses the Brave Search API (via an external MCP service like Smithery) to perform both web and local searches. It understands natural language queries, selects the appropriate search tool, and exposes this enhanced capability as a single, callable MCP tool.

**Key Features**
- 🤖 **Intelligent Tool Selection**: The AI agent decides between Brave's web search and local search tools based on user query context.
- 🌐 **MCP Microservice**: Exposes complex search logic as a single, easy-to-integrate MCP tool (call_brave_search_agent).
- 🧠 **Powered by Google Gemini**: Utilizes the gemini-2.5-flash-preview-05-20 LLM for advanced reasoning.
- 🗣️ **Conversational Memory**: Remembers context within a single execution flow.
- 📝 **Customizable System Prompt**: Tailor the AI's behavior and responses.
- 🧩 **Modular Design**: Connects to external Brave Search MCP tools (e.g., from Smithery).

**Benefits**
- 🔌 **Simplified Integration**: Easily add advanced, AI-driven search capabilities to other applications or agent systems.
- 💸 **Reduced Client-Side LLM Costs**: Offloads complex prompting and tool orchestration to n8n, minimizing token usage for client-side LLMs.
- 🔧 **Centralized Logic**: Manage and update search strategies and AI behavior in one place.
- 🚀 **Extensible**: Can be adapted to use other search tools or incorporate more complex decision-making.

**Nodes Used**
- @n8n/n8n-nodes-langchain.mcpTrigger (MCP Server Trigger)
- @n8n/n8n-nodes-langchain.toolWorkflow
- @n8n/n8n-nodes-langchain.agent (AI Agent)
- @n8n/n8n-nodes-langchain.lmChatGoogleGemini (Google Gemini Chat Model)
- n8n-nodes-mcp.mcpClientTool (MCP Client Tool - for Brave Search)
- @n8n/n8n-nodes-langchain.memoryBufferWindow (Simple Memory)
- n8n-nodes-base.executeWorkflowTrigger (Workflow Start - for direct execution/testing)

**Prerequisites**
- An active n8n instance (v1.22.5+ recommended).
- A Google AI API key for using the Gemini LLM.
- Access to an external MCP service that provides Brave Search tools (e.g., a Smithery account configured with their Brave Search MCP). This includes the MCP endpoint URL and any necessary authentication (like an API key for Smithery).

**Setup Instructions**
1. **Import Workflow**: Download the Brave_Search_Smithery_AI_Agent_MCP_Server.json file and import it into your n8n instance.
2. **Configure LLM Credential**: Locate the 'Google Gemini Chat Model' node. Select or create an n8n credential for "Google Palm API" (used for Gemini), providing your Google AI API key.
3. **Configure Brave Search MCP Credential**: Locate the 'brave_web_search' and 'brave_local_search' (MCP Client) nodes. Create a new n8n credential of type "MCP Client HTTP API":
   - Name: e.g., Smithery Brave Search Access
   - Base URL: the URL of your Brave Search MCP endpoint from your provider (e.g., https://server.smithery.ai/@YOUR_PROFILE/brave-search/mcp)
   - Authentication: if your MCP provider requires an API key, select "Header Auth" and add a header with the name (e.g., X-API-Key) and value provided by your MCP service.
   Assign this newly created credential to both the 'brave_web_search' and 'brave_local_search' nodes.
4. **Note MCP Trigger Path**: Open the 'Brave Search MCP Server Trigger' node and copy its unique 'Path' (e.g., /cc8cc827-3e72-4029-8a9d-76519d1c136d). Combine this with your n8n instance's base URL to get the full endpoint URL for clients.

**How to Use**

This workflow exposes an MCP tool named call_brave_search_agent. External clients can call this tool via the URL derived from the 'Brave Search MCP Server Trigger'.
Example client MCP configuration (e.g., for Roo Code):

```json
"n8n-brave-search-agent": {
  "url": "https://YOUR_N8N_INSTANCE/mcp/cc8cc827-3e72-4029-8a9d-76519d1c136d/sse",
  "alwaysAllow": ["call_brave_search_agent"]
}
```

Replace YOUR_N8N_INSTANCE with your n8n's public URL and ensure the path matches your trigger node.

Example request: send a POST request to the trigger URL with a JSON body:

```json
{
  "input": {
    "query": "best coffee shops in London"
  }
}
```

The agent will stream its response, including the summarized search results.

**Customization**
- **AI Behavior**: Modify the System Prompt within the 'Brave Search AI Agent' node to fine-tune its decision-making, response style, or how it uses the search tools.
- **LLM Choice**: Replace the 'Google Gemini Chat Model' node with any other compatible LLM node supported by n8n.
- **Search Tools**: Adapt the workflow to use different or additional search tools by modifying the MCP Client nodes and updating the AI Agent's system prompt and tool definitions.

**Further Information**
GitHub Repository: https://github.com/jezweb/n8n
The workflow includes extensive sticky notes for in-canvas documentation.

**Author**
Jeremy Dawes (Jezweb)
by Ranjan Dailata
**Notice**
Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**

This n8n-powered automation uses Bright Data's MCP Client to extract real-time data from a price-drop site listing Amazon products, including price changes and related product details. The extracted data is enriched with structured data transformation, content summarization, and sentiment analysis using the Google Gemini LLM.

The Amazon Price Drop Intelligence Engine is designed for:
- **Ecommerce Analysts** who need timely updates on competitor pricing trends
- **Brand Managers** seeking to understand consumer sentiment around pricing
- **Data Scientists** building pricing models or enrichment pipelines
- **Affiliate Marketers** looking to optimize campaigns based on dynamic pricing
- **AI Developers** automating product intelligence pipelines

**What problem is this workflow solving?**
- Reliable Scraping: Uses Bright Data MCP, a managed crawling platform that handles proxies, captchas, and site structure changes automatically.
- Insight Generation: Transforms unstructured HTML into structured data and then into human-readable summaries using the Google Gemini LLM.
- Sentiment Context: Goes beyond raw pricing data to reveal how customers feel about the price change, helping businesses and researchers measure consumer reaction.
- Automated Reporting: Aggregates and stores data for easy access and downstream automation (e.g., dashboards, notifications, pricing models).

**What this workflow does**
1. Scrape the price-drop site with Bright Data MCP: the workflow begins by scraping a targeted price-drop site for Amazon listings using Bright Data's Model Context Protocol (MCP); the target site is configurable.
2. Structured Data Extraction: once the HTML content is retrieved, Google Gemini parses and structures the product information (title, price, discount, brand, ratings).
3. Summarization & Sentiment Analysis: the extracted data is passed through an LLM chain to generate a concise summary of the product and its recent price movement, and to perform sentiment analysis on user reviews and public perception.
4. Store the Results: save to disk for archiving or bulk processing, and update a Google Sheet, making the data instantly shareable with your team or ready to integrate into a BI dashboard.

**Pre-conditions**
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- A Bright Data account, set up as described in the Setup section below.
- A Google Gemini API key (visit Google AI Studio).
- The Bright Data MCP Server @brightdata/mcp installed.
- The n8n-nodes-mcp community package installed.

**Setup**
1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the MCP Client (STDIO) credential to connect to the Bright Data MCP Server, and copy your Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token> (a sketch of this credential follows).
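To make that last step concrete, here is a minimal sketch of what the STDIO credential amounts to: launching the MCP server via npx with the token in its environment. The field layout follows the common MCP client convention; the exact labels in n8n's credential form may differ:

```json
{
  "command": "npx",
  "args": ["-y", "@brightdata/mcp"],
  "env": { "API_TOKEN": "<your-bright-data-api-token>" }
}
```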
**How to customize this workflow to your needs**
- **Target different platforms**: Swap Amazon for Walmart, eBay, or any ecommerce source using Bright Data's flexible scraping infrastructure.
- **Enrich with more LLM tasks**: Add brand tone analysis, category classification, or competitive benchmarking using Gemini prompts.
- **Visualize output**: Pipe the Google Sheet to Looker Studio, Tableau, or Power BI.
- **Notification integrations**: Add Slack, Discord, or email notifications for price drop alerts.
by SuperAgent
**Who is this template for?**
This template is ideal for small businesses, agencies, and solo professionals who want to automate appointment scheduling and caller follow-up through a voice-based AI receptionist. If you're using tools like Google Calendar, Airtable, and Vapi (Twilio), this setup is for you.

**What problem does this workflow solve?**
Manual call handling, appointment booking, and email coordination are time-consuming and error-prone. This workflow automates the receptionist role: answering calls, checking calendar availability, managing appointments, and storing call summaries, all without human intervention.

**What this workflow does**
This Agent Receptionist manages inbound voice calls and scheduling tasks using Vapi and Google Calendar. It checks availability, books or updates calendar events, sends email confirmations, and logs call details into Airtable. The workflow includes built-in logic for slot management, email triggers, and storing call transcripts.

**Setup Instructions**
1. Duplicate Airtable Base: Use the provided Airtable base template (BASE LINK).
2. Import Workflow: Load the provided JSON into your n8n instance.
3. Credentials: Connect your Google Calendar and Airtable credentials in n8n.
4. Activate Workflow: Enable the workflow to get live webhook URLs.
5. Vapi Configuration: Paste the provided system prompt into your Vapi Assistant, then link the appropriate webhook URLs from n8n (GetSlots, BookSlots, UpdateSlots, CancelSlots, and end-of-call report).

**Disclaimer**
Optimized for cloud-hosted n8n instances. Self-hosted users should verify webhook and credential setups.
by Nick Saraev
**AI Ad Scraper & Image Generator with Facebook Ad Library**

Categories: PPC Automation, Creative Generation, Competitive Intelligence

This workflow creates an end-to-end ad library scraper and AI image spinner system that automatically discovers competitor ads, analyzes their design elements, and generates multiple unique variations ready for your own campaigns. Built to eliminate 60-70% of manual creative work for PPC agencies, this system transforms competitor research into actionable ad variants in minutes.

**Benefits**
- **Automated Competitor Research** - Scrapes the Facebook Ad Library for active competitor campaigns automatically
- **AI-Powered Creative Analysis** - Uses OpenAI vision to comprehensively analyze ad design elements and copy
- **Intelligent Image Generation** - Creates 3+ unique variations per source ad while maintaining effective layouts
- **Complete Asset Organization** - Automatically organizes source ads and generated variations in structured Google Drive folders
- **Campaign-Ready Output** - Generates a Google Sheets database with direct links to all assets for immediate campaign deployment
- **Massive Time Savings** - Replaces hours of manual creative work with automated competitive intelligence and generation

**How It Works**

Facebook Ad Library Scraping:
- Connects to Facebook's Ad Library through an Apify scraper integration
- Searches active ads based on keywords, industries, or competitor targeting
- Filters for image-based ads and removes video-only content from processing

Intelligent Asset Organization:
- Creates a unique Google Drive folder structure for each scraped ad campaign
- Separates source competitor ads from AI-generated variations
- Maintains an organized asset library for easy campaign management and iteration

AI-Powered Creative Analysis:
- Uses OpenAI's vision model to comprehensively describe each competitor ad
- Identifies design elements, color schemes, layout patterns, and messaging approaches
- Generates detailed creative briefs for intelligent variation generation

Smart Image Variation System:
- Creates 3 unique style variations per source ad using advanced AI prompting
- Maintains effective layout structures while changing colors, fonts, and styling
- Customizes messaging and branding to match your business requirements

Campaign Database Integration:
- Logs all source ads and generated variations in organized Google Sheets
- Provides direct links to all assets for immediate campaign deployment
- Tracks performance data and creative iterations for ongoing optimization

**Required Setup Configuration**

Google Drive Structure - the workflow automatically creates this folder organization:

```
PPC Thievery (Parent Folder)
├── [Ad Archive ID] (Per Campaign)
│   ├── 1. Source Assets (Original competitor ads)
│   └── 2. Spun Assets (AI-generated variations)
```
Google Sheets Database Columns:
- timestamp - Unique record identifier
- ad_archive_id - Facebook's internal ad identifier
- page_id - Advertiser's Facebook page ID
- original_image_url - Direct link to the source competitor ad
- page_name - Advertiser's business name
- ad_body - Original ad copy text
- date_scraped - When the ad was discovered
- spun_prompts - AI-generated variation instructions
- asset_folder - Link to the campaign's Google Drive folder
- source_folder - Link to the original ads folder
- spun_folder - Link to the generated variations folder
- direct_spun_image_link - Direct link to the generated ad image

Set Variables Configuration - update these values in the "Set Variables" node:
- googleDriveFolderId - Your parent Google Drive folder ID
- changeRequest - Your brand-specific variation instructions
- spreadsheetId - Your Google Sheets database ID

Apify API Setup:
- Create an Apify account and obtain an API key
- Replace <your-apify-api-key-here> with actual credentials
- Customize search terms in the JSON body for your target competitors (see the sketch below)
- Adjust the scraping count (default: 20 ads per run)
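For orientation, triggering an Apify actor run is a single authenticated POST against Apify's documented runs endpoint. The actor ID below is a placeholder for your chosen Facebook Ad Library scraper, and the input field names are assumptions for illustration; check the actor's input schema for the real ones:

```
POST https://api.apify.com/v2/acts/<ad-library-scraper-actor-id>/runs?token=<your-apify-api-key-here>
Content-Type: application/json

{
  "searchTerms": ["competitor brand name"],
  "count": 20
}
```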
**Business Use Cases**
- **PPC Agencies** - Automate competitive research and creative generation for client campaigns
- **E-commerce Brands** - Monitor competitor advertising strategies and generate response campaigns
- **Marketing Teams** - Scale creative production with AI-powered competitive intelligence
- **Freelance Marketers** - Offer advanced competitive analysis and creative services to clients
- **SaaS Companies** - Track competitor messaging and generate differentiated ad variations
- **Agency Teams** - Replace manual creative research with automated competitive intelligence systems

**Revenue Potential**
This system revolutionizes PPC agency economics:
- **60-70% reduction** in manual creative work and competitive research time
- **3-5x faster** campaign launch times with ready-to-use creative assets
- **$2,000-$5,000 service value** for comprehensive competitive intelligence and creative generation
- **Scalable competitive advantage** through automated monitoring of competitor campaigns
- **Premium positioning** offering AI-powered creative intelligence that competitors can't match manually

Difficulty Level: Advanced
Estimated Build Time: 2-3 hours
Monthly Operating Cost: ~$100 (Apify + OpenAI + Google APIs)

**Watch My Complete Live Build**
Want to see me build this entire system from scratch? I walk through every component live - including the ad library integration, AI analysis setup, image generation pipeline, and all the debugging that goes into creating a production-ready competitive intelligence system.

🎥 See My Live Build Process: "Ad Library Scraper & AI Image Spinner System (N8N Build)"

This comprehensive tutorial shows the real development process - including advanced AI prompting for image generation, competitive analysis strategies, and the organizational systems that make this scalable for agency use.

**Set Up Steps**
1. Initial Database Setup: Run the initialization flow once to create your Google Drive folder and Sheets database. Copy the generated folder ID and spreadsheet ID into the "Set Variables" node. Configure your brand-specific change request template for consistent output.
2. Apify Integration: Set up an Apify account with Facebook Ad Library scraper access. Configure API credentials and test with small ad batches. Customize search parameters for your target competitors and industries.
3. AI Service Configuration: Connect the OpenAI API for vision analysis and image generation. Set up appropriate rate limiting to control processing costs. Test the complete AI pipeline with sample competitor ads.
4. Google Services Setup: Configure Google Drive API credentials for automated folder creation. Set up the Google Sheets integration for campaign database management. Test the complete asset organization and tracking workflow.
5. Campaign Customization: Define your brand guidelines and messaging requirements in the change request. Set up variation templates for different campaign types and industries. Configure batch processing limits based on your API usage requirements.
6. Production Optimization: Remove the limit node for full-scale competitive monitoring. Set up automated scheduling for regular competitive intelligence gathering. Monitor and optimize AI prompts based on generated creative quality.

**Advanced Optimizations**
Scale the system with:
- **Multi-Platform Scraping:** Extend to LinkedIn, Twitter, and Google Ads for comprehensive competitive intelligence
- **Performance Tracking:** Integrate with ad platforms to track performance of generated variations
- **Style Guide Automation:** Create industry-specific variation templates for consistent brand application
- **A/B Testing Integration:** Automatically test generated variations against source ads for performance optimization
- **CRM Integration:** Connect competitive intelligence data with sales and marketing systems

**Important Considerations**
- **API Rate Limits:** Built-in delays prevent service overload and ensure reliable operation
- **Creative Quality:** The system generates multiple variations to account for AI generation variability
- **Legal Compliance:** Use generated variations as inspiration while respecting intellectual property rights
- **Cost Management:** Monitor OpenAI image generation costs and adjust batch sizes accordingly
- **Competitive Ethics:** Focus on learning from successful patterns rather than direct copying

**Why This System Works**
The competitive advantage lies in speed and scale:
- **Minutes vs. Hours:** Generate campaign-ready creative variations in minutes instead of hours of manual work
- **Systematic Analysis:** AI vision provides consistent, comprehensive analysis that humans might miss
- **Organized Intelligence:** Structured asset management enables rapid campaign deployment and iteration
- **Scalable Monitoring:** Automated competitive research that scales beyond manual capacity
- **Quality Variations:** Multiple AI-generated options ensure high-quality creative output

**Check Out My Channel**
For more advanced automation systems and proven agency-building strategies that generate real revenue, explore my YouTube channel where I share the exact methodologies used to scale automation agencies to $72K+ monthly revenue.
by Jimleuk
This template attempts to replicate OpenAI's DeepResearch feature which, at the time of writing, is only available to their pro subscribers.

> An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. Source

Though the inner workings of DeepResearch have not been made public, it is presumed the feature relies on the ability to deep-search the web, scrape web content, and invoke reasoning models to generate reports. All of which n8n is really good at! Using this workflow, n8n users can enjoy a variation of the Deep Research experience for themselves and their teams at a fraction of the cost. Better yet, they can learn from and customise this Deep Research template for their businesses and/or organisations.

Check out the generated reports here: https://jimleuk.notion.site/19486dd60c0c80da9cb7eb1468ea9afd?v=19486dd60c0c805c8e0c000ce8c87acf

**How it works**
- A form first captures the user's research query and how deep they'd like the researcher to go.
- Once submitted, a blank Notion page is created which will later hold the final report, and the researcher gets to work.
- The user's query goes through a recursive series of web searches and web scraping to collect data on the research topic and generate partial learnings.
- Once complete, all learnings are combined and given to a reasoning LLM to generate the final report.
- The report is then written to the placeholder Notion page created earlier.

**How to use**
1. Duplicate this Notion database template and make sure all Notion-related nodes point to it.
2. Sign up for an APIFY.com API key for web search and scraping services.
3. Ensure you have access to OpenAI's o3-mini model. Alternatively, switch this out for the o1 series.
4. You must publish this workflow and ensure the form URL is publicly accessible.

**On depth & breadth configuration**
For more detailed reports, increase depth and breadth, but be warned: the workflow will take exponentially longer and cost more to complete. The recommended defaults are usually good enough.
- Depth=1 & Breadth=2 - takes about 5-10 mins.
- Depth=1 & Breadth=3 - takes about 15-20 mins.
- Depth=3 & Breadth=5 - takes about 2+ hours!

**Customising this workflow**
- I deliberately chose not to use AI-powered scrapers like Firecrawl as I felt these were quite costly and quotas would be quickly exhausted. However, feel free to switch to web search and scraping services which suit your environment. Maybe you decide not to source the web and data collection comes from internal documents instead; this template gives you the freedom to change this.
- Experiment with different reasoning/thinking models such as DeepSeek and Google's Gemini 2.0.
- Finally, the LLM prompts could definitely be improved. Refine them to fit your use case.

**Credits**
This template is largely based on the work of David Zhang (dzhng) and his open-source implementation of Deep Research: https://github.com/dzhng/deep-research
by Luciano Gutierrez
**Healthcare Clinic Assistant with WhatsApp and Telegram Integration**

Version: 1.1.0
n8n Version: 1.88.0+
License: MIT

**📋 Description**
A comprehensive and modular automation workflow designed for healthcare clinics. It manages patient communication, appointment scheduling, confirmations, rescheduling, internal tasks, and media processing by integrating WhatsApp, Telegram, Google Calendar, and Google Tasks, combined with AI-powered agents for maximum efficiency. This system guarantees proactive communication with patients, streamlined internal clinic management, and consistent data synchronization across platforms.

**🌟 Key Features**
- 🤖 **AI-Powered Specialized Agents**: Distinct agents handle WhatsApp patient support, appointment confirmations, and internal rescheduling tasks.
- 📱 **Omnichannel Communication**: Handles patient interactions via WhatsApp and staff commands via Telegram.
- 📅 **Google Calendar Appointment Management**: Full synchronization for creating, updating, canceling, and confirming appointments.
- 📋 **Task Management with Google Tasks**: Manages shopping lists and administrative tasks efficiently through staff Telegram requests.
- 🔔 **Automated Appointment Reminders**: A daily-triggered system proactively sends WhatsApp confirmations to patients for next-day appointments.
- 🖼️ **Intelligent Media Processing**: Transcribes audio, extracts text from images, and processes documents using OpenAI and OpenRouter AI models.
- 🛡️ **Escalation to Human Support**: Automatically detects sensitive or urgent cases and escalates them to a human agent when needed.

**🏥 Use Cases**
- **Patient Communication:** Respond to inquiries, schedule, reschedule, and confirm appointments seamlessly via WhatsApp.
- **Internal Clinic Operations:** Allow staff to modify appointments or add shopping list reminders directly from Telegram.
- **Appointment Confirmation System:** Automatically contacts patients one day prior to appointments for confirmation or rescheduling.
- **Task and Reminder Management:** Keeps clinic operations organized through automatic task management with Google Tasks.

**🛠️ Technical Implementation**

WhatsApp Patient Interaction Flow:
1. **Webhook Reception:** Incoming WhatsApp messages are captured via the Evolution API webhook.
2. **Message Classification:** Intelligent routing of messages based on content type (text, image, audio, document).
3. **Media Content Processing:** Audio is downloaded, converted, and transcribed via OpenAI Whisper; images are analyzed and their text/descriptions extracted with the OpenAI Vision model.
4. **Patient Request Handling:** A specialized WhatsApp assistant responds appropriately using AI prompts.
5. **Outbound Message Formatting:** Ensures messages comply with WhatsApp format standards.
6. **Message Delivery:** Sends responses back via the Evolution API.

Telegram Staff Management Flow:
1. **Telegram Webhook Reception:** Captures messages from authorized staff accounts.
2. **Internal Assistant Processing:** Appointment rescheduling identifies and updates appointments through MCP Google Calendar; task creation adds new entries to the clinic's shopping list using Google Tasks.
3. **Notifications and Confirmations:** Sends confirmations back to staff through Telegram.

Appointment Reminder System:
1. **Daily Trigger Activation:** Fires every weekday at 08:00 AM (cron shown below).
2. **Calendar Scraping:** Lists the next day's appointments from Google Calendar.
3. **Patient Contact:** Sends WhatsApp confirmation messages for each appointment.
4. **Response Management:** Redirects confirmation or rescheduling replies to the appropriate agents.
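For reference, "every weekday at 08:00 AM" corresponds to the following cron expression, which n8n's Schedule Trigger accepts in its custom (cron) mode:

```
0 8 * * 1-5
```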
**⚙️ Setup Instructions**
1. Import the Workflow: n8n → Workflows → Import from File → Upload this JSON file.
2. Configure Credentials:
   - Evolution API (WhatsApp communication)
   - Telegram Bot API (staff communication)
   - Google Calendar OAuth2 (appointment management)
   - Google Tasks OAuth2 (task management)
   - OpenAI and OpenRouter APIs (AI agents)
   - PostgreSQL database (chat memory)
3. Set Sensitive Variables: replace the placeholder values:
   - {sua instância aqui} ("your instance here") → Evolution API instance name
   - {número_whatsapp} → WhatsApp numbers
   - {url_do_servidor} ("server URL") → server URLs
   - {a sua apikey aqui} ("your API key here") → API keys
   - {seu_calendario} ("your calendar") → Google Calendar ID
4. Customize AI Prompts: adjust the system prompts to fit your clinic's tone, service style, and patient communication guidelines. Set clinic operating hours, escalation rules, and cancellation procedures in the AI prompts.
5. Activate and Test: simulate patient messages via WhatsApp, test Telegram commands from staff members, and validate the daily appointment reminders using the scheduled trigger.

**🏷️ Tags**
Healthcare, Clinic Management, WhatsApp Integration, Telegram Bot, Appointment Scheduling, Google Calendar, Google Tasks, AI Agents, n8n Automation

**📚 Technical Notes**
- PostgreSQL is used for persistent chat memory across sessions.
- Multiple AI models are used: OpenAI GPT-4.1-nano, OpenAI GPT-4.1-mini, Google Gemini 2.0 and 2.5.
- Full media content processing is supported (audio, image, text).
- Compliant escalation workflows ensure patient safety and proper handoff to human staff when necessary.
- All sensitive patient data are securely stored inside calendar event descriptions for easy retrieval by agents.

**📜 License**
This workflow is provided under the MIT License. Feel free to adapt and customize it for your clinic's specific needs.
by Jez
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Uncover new business leads with this AI-powered Prospect Discovery Agent! This n8n workflow acts as a specialized intelligent assistant that, given a business type and location, uses multiple search strategies to identify a list of potential prospect companies and their websites.

Stop manually trawling through search results! This agent automates the initial phase of lead generation by:
- Understanding your target business profile (type, location, keywords).
- Strategically using web search tools (Brave Search, Google Gemini Search) to find relevant businesses.
- Performing quick validations to confirm relevance.
- Returning a clean, structured JSON list of prospect names and their website URLs.

**How it Works**
The workflow is built around an AI agent powered by Google Gemini. This agent is equipped with tools like:
- **Brave Web Search:** For broad initial sourcing of potential business candidates.
- **Google Gemini Search:** For advanced, context-aware discovery and finding businesses mentioned in various online sources.
- **Brave Local Search (Selective):** For quick verification of local presence or finding website URLs for identified names.
- **Jina AI Web Page Scraper (Very Selective):** For extremely rapid relevance checks on uncertain websites by scanning page content for keywords.

The agent's system prompt guides it to use these tools efficiently to build a list of prospects without getting bogged down in deep research on any single one at this discovery stage.

**Use Cases**
- **Lead Generation:** Automatically generate lists of potential clients based on industry and location.
- **Market Research:** Identify key players or types of businesses in a specific geographical area.
- **Sales Development:** Provide SDRs with initial lists of companies to research further.
- **Called as a Sub-Workflow:** Designed to be easily integrated as a "tool" into more complex orchestrating AI agents (e.g., a BNI Pitch Planner that first needs to identify who to target).

**Setup**
1. Import the workflow.
2. Configure credentials. You'll need n8n credentials for:
   - Google Gemini (for the chat model and the Gemini Search/Vertex AI Search tool).
   - Brave Search (e.g., via Smithery MCP, or adapt if you have direct API access).
   - Jina AI (for the web scraper).
   Assign these to the respective nodes.
3. Review the system prompt: the prospect_discovery_agent node contains a detailed system prompt. You can fine-tune this to adjust its search strategies or the strictness of its matching.

**Inputs**
This workflow is triggered by an "Execute Workflow Trigger" node (prospect_discovery_workflow). It expects the following inputs (an example payload appears at the end of this section):
- business_type (string): e.g., "artisan bakery"
- location_query (string): e.g., "Portland, Oregon"
- desired_num_prospects (number): e.g., 5
- additional_keywords (string, optional): e.g., "organic, gluten-free"

**To Use (as a Sub-Workflow/Tool)**
This workflow is typically called by another n8n workflow (e.g., using a "Tool Workflow" node from the Langchain nodes). The calling workflow provides the inputs listed above. The Prospect Discovery workflow then executes, and its final node (the prospect_discovery_agent) outputs a JSON array of found prospects, like:

```json
[
  {
    "business_name": "Rose Petal Bakery",
    "website_url": "https://rosepetalbakerypdx.com"
  },
  {
    "business_name": "The Daily Bread Artisans",
    "website_url": "https://dailybreadpdx.com"
  }
]
```

If no prospects are found, it returns an empty array [].
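For completeness, the input payload a calling workflow passes would be shaped like this (values taken from the examples above):

```json
{
  "business_type": "artisan bakery",
  "location_query": "Portland, Oregon",
  "desired_num_prospects": 5,
  "additional_keywords": "organic, gluten-free"
}
```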
This template provides a powerful and focused tool for automating the initial stages of prospect identification.
by Kanaka Kishore Kandregula
**Boost Sales with Automated Magento 2 Product and Coupon Notifications**

This n8n workflow automatically posts new Magento products and coupons to Telegram while preventing duplicates.

**Key benefits:**
- ✅ Increase conversions with time-sensitive alerts (creates urgency)
- ✅ Reduce missed opportunities with 24/7 monitoring
- ✅ Improve customer engagement through rich media posts
- ✅ Save hours per week by automating manual posting

**Why This Works:**
- Triggers impulse buys with real-time notifications
- Eliminates human error in duplicate posting
- Scales effortlessly as your catalog grows
- Provides analytics through database tracking

**Perfect for e-commerce stores wanting to:**
- Announce new arrivals instantly
- Promote limited-time offers effectively
- Maintain a consistent social presence
- Track performance through MySQL

**This workflow automatically:**
- ✅ Detects new products AND coupons in Magento
- ✅ Prevents duplicate postings with MySQL tracking
- ✅ Posts rich formatted alerts to Telegram
- ✅ Runs on a customizable schedule

**✨ Key Features**

For Products:
- Product name, price, and image
- Direct store link
- Media gallery support

For Coupons:
- Coupon code and status
- Usage limits (times used/available)
- Active/inactive status indicator

Core System:
- 🔒 MySQL duplicate prevention
- ⏰ 1-hour schedule (customizable)
- 📱 Telegram notifications with Markdown

**🛠️ Configuration Guide**

Database Setup:

```sql
CREATE TABLE IF NOT EXISTS posted_items (
  item_id INT PRIMARY KEY,
  item_type ENUM('product', 'coupon') NOT NULL,
  item_value VARCHAR(255),
  posted BOOLEAN DEFAULT FALSE,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
```

Required Credentials:
- Magento API (HTTP Header Auth)
- MySQL database
- Telegram bot

Sticky Notes:

```
❗ IMPORTANT SETUP NOTES ❗
- For products: Ensure 'url_key' exists in custom_attributes
- For coupons: Magento REST API must expose coupon rules
- MySQL user needs INSERT/SELECT privileges
- Telegram bot must be added to your channel first

🔄 SCHEDULING:
- Default: Checks every hour at :00
- Adjust in the Schedule Trigger node
```

**⚙️ Technical Details**

Workflow Logic (see the SQL sketch at the end of this section):
1. Checks for new products/coupons via the Magento API
2. Verifies against the MySQL database
3. Only posts if the record doesn't exist
4. Updates the database after a successful post

Error Handling:
- Automatic skip if the product/coupon already exists
- Empty result handling
- Connection timeout protection

**🌟 Why This Template?**
- **Complete Solution**: Handles both products AND coupons
- **Battle-Tested**: Prevents all duplicates reliably
- **Ready-to-Use**: Just add your credentials
- **Fully Customizable**: Easy to modify for different needs

Perfect for e-commerce stores using Magento 2 who want automated, duplicate-free social notifications!
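As a sketch of the duplicate-prevention step, the check-then-record pattern against the table above boils down to two queries like these (the item values are illustrative; the actual queries live in the workflow's MySQL nodes):

```sql
-- Skip the item if it has already been posted
SELECT item_id FROM posted_items
WHERE item_id = 123 AND item_type = 'product';

-- After a successful Telegram post, record it to block future duplicates
INSERT INTO posted_items (item_id, item_type, item_value, posted)
VALUES (123, 'product', 'new-product-url-key', TRUE);
```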