by David Ashby
Complete MCP server exposing 14 doqs.dev | PDF filling API operations to AI agents.

⚡ Quick Setup

Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add doqs.dev | PDF filling API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the doqs.dev | PDF filling API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.doqs.dev/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (14 total)

🔧 Designer (7 endpoints)
• GET /designer/templates/: List Templates
• POST /designer/templates/: Create Template
• POST /designer/templates/preview: Preview
• DELETE /designer/templates/{id}: Delete
• GET /designer/templates/{id}: Get Template
• PUT /designer/templates/{id}: Update Template
• POST /designer/templates/{id}/generate: Generate PDF

🔧 Templates (7 endpoints)
• GET /templates: List
• POST /templates: Create
• DELETE /templates/{id}: Delete
• GET /templates/{id}: Get Template
• PUT /templates/{id}: Update
• GET /templates/{id}/file: Get File
• POST /templates/{id}/fill: Fill

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native doqs.dev | PDF filling API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
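The $fromAI() placeholders mentioned above are what let the connected agent fill in each call's parameters. As a rough illustration (parameter names and descriptions here are assumptions, not copied from this template's JSON), an HTTP Request node parameter typically embeds them like this:

```javascript
// Illustrative n8n expressions using $fromAI(key, description, type); values are
// supplied by the AI agent at call time. Names below are hypothetical examples.

// Path parameter for GET /templates/{id}:
// https://api.doqs.dev/v1/templates/{{ $fromAI('template_id', 'ID of the template to retrieve', 'string') }}

// Body field for POST /templates/{id}/fill:
// {{ $fromAI('fill_data', 'JSON object of PDF field names and values', 'json') }}
```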
by Alexandra Spalato
Who's it for

This workflow is for community builders, marketers, consultants, coaches, and thought leaders who want to grow their presence in Skool communities through strategic, value-driven engagement. It's especially useful for professionals who want to:

- Build authority in their niche by providing helpful insights
- Scale their community engagement without spending hours manually browsing posts
- Identify high-value conversation opportunities that align with their expertise
- Maintain an authentic, helpful presence across multiple Skool communities

What problem is this workflow solving

Many professionals struggle to consistently engage meaningfully in online communities due to:

- **Time constraints**: Manually browsing multiple communities daily is time-consuming
- **Missed opportunities**: Important discussions happen when you're not online
- **Inconsistent engagement**: Sporadic participation reduces visibility and relationship building
- **Generic responses**: Quick replies often lack the depth needed to showcase expertise

This workflow solves these problems by automatically monitoring your target Skool communities, using AI to identify posts where your expertise could add genuine value, generating thoughtful contextual comment suggestions, and organizing opportunities for efficient manual review and engagement.

How it works

Scheduled Community Monitoring
- Runs daily at 7 PM to scan your configured Skool communities for new posts and discussions from the last 24 hours.

Intelligent Configuration Management
- Pulls settings from Airtable, including target communities, your domain expertise, and preferred tools
- Supports multiple configurations
- Filters for active configurations only
- Processes multiple community URLs efficiently

Comprehensive Data Extraction
Uses the Apify Skool Scraper to collect:
- Post content and metadata
- Comments over 50 characters (quality filter)
- Direct links for easy access

AI-Powered Opportunity Analysis
Leverages OpenAI GPT-4.1 to:
- Analyze each post for engagement opportunities based on your expertise
- Identify specific trigger sentences that indicate a need you can address
- Generate contextual, helpful comment suggestions
- Maintain an authentic tone without being promotional

Smart Filtering and Organization
- Only surfaces genuine opportunities where you can add value
- Stores results in Airtable with detailed reasoning
- Provides suggested comments ready for review and posting
- Tracks engagement history to avoid duplicate responses

Quality Control and Review
All opportunities are saved to Airtable where you can:
- Review AI reasoning and suggested responses
- Edit comments before posting
- Track which opportunities you've acted on
- Monitor success patterns over time

How to set up

Required credentials
- **OpenAI API key** - For GPT-4.1 powered opportunity analysis
- **Airtable Personal Access Token** - For configuration and results storage
- **Apify API token** - For Skool community scraping

Airtable base setup
Create an Airtable base with two tables:

Config Table (config):
- Name (Single line text): Your configuration name
- Skool URLs (Long text): Comma-separated list of Skool community URLs
- cookies (Long text): Your Skool session cookies for authenticated access
- Domain of Activity (Single line text): Your area of expertise (e.g., "AI automation", "Digital marketing")
- Tools Used (Single line text): Your preferred tools to recommend (e.g., "n8n", "Zapier")
- active (Checkbox): Whether this configuration is currently active

Results Table (Table 1):
- title (Single line text): Post title/author
- url (URL): Direct link to the post
- reason (Long text): AI's reasoning for the opportunity
- trigger (Long text): Specific sentence that triggered the opportunity
- suggested answer (Long text): AI-generated comment suggestion
- config (Link to another record): Reference to the config used
- date (Date): When the opportunity was found
- Select (Single select): Status tracking (not commented/commented)

Skool cookies setup
To access private Skool communities, you'll need to:
1. Install Cookie Editor: Go to the Chrome Web Store and install the "Cookie Editor" extension
2. Log in to Skool: Navigate to any Skool community you want to monitor and log in
3. Open Cookie Editor: Click the Cookie Editor extension icon in your browser toolbar
4. Export cookies: Click the "Export" button in the extension and copy the exported text
5. Add to Airtable: Paste the cookie string into the cookies field in your Airtable config

Trigger configuration
- Ensure the Schedule Trigger is set to your preferred monitoring time
- Default is 7 PM daily, but adjust based on your target communities' peak activity

Requirements
- **Self-hosted n8n or n8n Cloud account**
- **Active Skool community memberships** - You must be a legitimate member of communities you want to monitor
- **OpenAI API credits**
- **Apify subscription** - For reliable Skool data scraping (free tier available)
- **Airtable account** - Free tier sufficient for most use cases

How to customize the workflow

Modify AI analysis criteria
Edit the EvaluateOpportunities And Generate Comments node to:
- Adjust the opportunity detection sensitivity
- Modify the comment tone and style
- Add industry-specific keywords or phrases

Change monitoring frequency
Update the Schedule Trigger to:
- Multiple times per day for highly active communities
- Weekly for slower-moving professional groups
- Custom intervals based on community activity patterns

Customize data collection
Modify the Apify scraper settings to:
- Adjust the time window (currently 24 hours)
- Change comment length filters (currently >50 characters)
- Include/exclude media content
- Modify the number of comments per post

Add additional filters
Insert filter nodes to:
- Skip posts from specific users
- Focus on posts with minimum engagement levels
- Exclude certain post types or keywords
- Prioritize posts from influential community members

Enhance output options
Add nodes after Record Results to:
- Send Slack/Discord notifications for high-priority opportunities
- Create calendar events for engagement tasks
- Export daily summaries to Google Sheets
- Integrate with CRM systems for lead tracking

Example outputs

Opportunity analysis result

{
  "opportunity": true,
  "reason": "The user is struggling with manual social media management tasks that could be automated using n8n workflows.",
  "trigger_sentence": "I'm spending 3+ hours daily just scheduling posts and responding to comments across all my social accounts.",
  "suggested_comment": "That sounds exhausting! Have you considered setting up automation workflows? Tools like n8n can handle the scheduling and even help with response suggestions, potentially saving you 80% of that time. The initial setup takes a day but pays dividends long-term."
}

Airtable record example
- Title: "Sarah Johnson - Social Media Burnout"
- URL: https://www.skool.com/community/post/123456
- Reason: "User expressing pain point with manual social media management - perfect fit for automation solutions"
- Trigger: "I'm spending 3+ hours daily just scheduling posts..."
- Suggested Answer: "That sounds exhausting! Have you considered setting up automation workflows?..."
- Config: [Your Config Name]
- Date: 2024-12-09 19:00:00
- Status: "not commented"

Best practices

Authentic engagement
- Always review and personalize AI suggestions before posting
- Focus on being genuinely helpful rather than promotional
- Share experiences and ask follow-up questions
- Engage in subsequent conversation when people respond

Community guidelines
- Respect each community's rules and culture
- Avoid over-promotion of your tools or services
- Build relationships before introducing solutions
- Contribute value consistently, not just when selling

Optimization tips
- Monitor which types of opportunities convert best
- A/B test different comment styles and approaches
- Track engagement metrics on your actual comments
- Adjust AI prompts based on community feedback
by Onur
Amazon Product Scraper with Scrape.do & AI Enrichment

> This workflow is a fully automated Amazon product data extraction engine. It reads product URLs from a Google Sheet, uses Scrape.do to reliably fetch each product page's HTML without getting blocked, and then applies an AI-powered extraction process to capture key product details such as name, price, rating, review count, and description. All structured results are neatly stored back into a Google Sheet for easy access and analysis.

This template is designed for consistency and scalability, ideal for marketers, analysts, and e-commerce professionals who need clean product data at scale.

🚀 What does this workflow do?
- **Reads Input URLs:** Pulls a list of Amazon product URLs from a Google Sheet.
- **Scrapes HTML Reliably:** Uses **Scrape.do** to bypass Amazon's anti-bot measures, ensuring the page HTML is always retrieved successfully.
- **Cleans & Pre-processes HTML:** Strips scripts, styles, and unnecessary markup, isolating only relevant sections like title, price, ratings, and feature bullets.
- **AI-Powered Data Extraction:** A LangChain/OpenRouter GPT-4 node verifies and enriches key fields: product name, price, rating, reviews, and description.
- **Stores Structured Results:** Appends all extracted and verified product data to a results tab in Google Sheets.
- **Batch & Loop Control:** Handles multiple URLs efficiently with Split In Batches to process as many products as you need.

🎯 Who is this for?
- **E-commerce Sellers & Dropshippers:** Track competitor prices, ratings, and key product features automatically.
- **Marketing & SEO Teams:** Collect product descriptions and reviews to optimize campaigns and content.
- **Analysts & Data Teams:** Build accurate product databases without manual copy-paste work.

✨ Benefits
- **High Success Rate:** **Scrape.do** handles proxy rotation and CAPTCHA challenges automatically, outperforming traditional scrapers.
- **AI Validation:** LLM verification ensures data accuracy and fills in gaps when HTML elements vary.
- **Full Automation:** Runs on-demand or on a schedule to keep product datasets fresh.
- **Clean Output:** Results are neatly organized in Google Sheets, ready for reporting or integration with other tools.

⚙️ How it Works
1. Manual or Scheduled Trigger: Start the workflow manually or via a cron schedule.
2. Input Source: Fetch URLs from a Google Sheet (TRACK_SHEET_GID).
3. Scrape with Scrape.do: Retrieve full HTML from each Amazon product page using your SCRAPEDO_TOKEN.
4. Clean & Pre-Extract: Strip irrelevant code and use regex to pre-extract key fields.
5. AI Extraction & Verification: The LangChain GPT-4 model refines and validates product name, description, price, rating, and reviews.
6. Save Results: Append enriched product data to the results sheet (RESULTS_SHEET_GID).

📋 n8n Nodes Used
- Manual Trigger / Schedule Trigger
- Google Sheets (read & append)
- Split In Batches
- HTTP Request (Scrape.do)
- Code (clean & pre-extract HTML)
- LangChain LLM (OpenRouter GPT-4)
- Structured Output Parser

🔑 Prerequisites
- Active n8n instance.
- **Scrape.do API token** (bypasses Amazon anti-bot measures).
- **Google Sheets** with:
  - TRACK_SHEET_GID: tab containing product URLs.
  - RESULTS_SHEET_GID: tab for results.
- **Google Sheets OAuth2 credentials** shared with your service account.
- **OpenRouter / OpenAI API credentials** for the GPT-4 model.

🛠️ Setup
1. Import the workflow into your n8n instance.
2. Set workflow variables:
   - SCRAPEDO_TOKEN – your Scrape.do API key.
   - WEB_SHEET_ID – Google Sheet ID.
   - TRACK_SHEET_GID – sheet/tab name for input URLs.
   - RESULTS_SHEET_GID – sheet/tab name for results.
3. Configure credentials for Google Sheets and OpenRouter.
4. Map columns in the "add results" node to match your Google Sheet (e.g., name, price, rating, reviews, description).
5. Run or schedule: Start manually or configure a schedule for continuous data extraction.

This Amazon Product Scraper delivers fast, reliable, and AI-enriched product data, ensuring your e-commerce analytics, pricing strategies, or market research stay accurate and fully automated.
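The "Clean & Pre-Extract" step is only described in prose above. Below is a minimal sketch of what such an n8n Code node could look like; the selectors, regexes, and field names are assumptions for illustration, not the template's actual code.

```javascript
// Hypothetical n8n Code node: strip noise from the scraped HTML and
// pre-extract a few fields with regex before handing the text to the LLM.
const html = $input.first().json.data || '';

// Remove scripts, styles, and comments to shrink the payload
const cleaned = html
  .replace(/<script[\s\S]*?<\/script>/gi, '')
  .replace(/<style[\s\S]*?<\/style>/gi, '')
  .replace(/<!--[\s\S]*?-->/g, '');

// Illustrative regexes for common Amazon markup (IDs and classes vary by page)
const title = (cleaned.match(/id="productTitle"[^>]*>([^<]+)</i) || [])[1]?.trim() || '';
const price = (cleaned.match(/class="a-offscreen">([^<]+)</i) || [])[1]?.trim() || '';
const rating = (cleaned.match(/([\d.]+) out of 5 stars/i) || [])[1] || '';

return [{ json: { title, price, rating, cleanedHtml: cleaned.slice(0, 50000) } }];
```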
by Omer Fayyaz
Efficient Loop-less n8n Workflow Backup Automation to Google Drive

This workflow eliminates traditional loop-based processing entirely, delivering unprecedented performance and reliability even when the number of workflows to be processed is large.

What Makes This Different:
- **NO SplitInBatches node** - Traditional workflows use loops to process workflows one by one
- **NO individual file uploads** - No more multiple Google Drive API calls
- **NO batch error handling** - Eliminates complex loop iteration error management
- **ALL workflows processed simultaneously** - Revolutionary single-operation approach
- **Single compressed archive** - One ZIP file containing all workflows
- **One Google Drive upload** - Maximum efficiency, minimum API usage

Key Benefits of Non-Loop Architecture:
- **3-5x Faster Execution** - Eliminated loop overhead and multiple API calls
- **Reduced API Costs** - Single upload instead of dozens of individual operations
- **Higher Reliability** - Fewer failure points, centralized error handling
- **Better Scalability** - Performance doesn't degrade with workflow count
- **Large Workflow Support** - Efficiently handles hundreds of workflows without performance degradation
- **Easier Maintenance** - Simpler workflow structure, easier debugging
- **Cleaner Monitoring** - Single success/failure point instead of loop iterations

Who's it for

This template is designed for n8n users, DevOps engineers, system administrators, and IT professionals who need reliable, automated backup solutions for their n8n workflows. It's perfect for businesses and individuals who want to ensure their workflow automation investments are safely backed up with intelligent scheduling, compression, and cloud storage integration.

How it works / What it does

This workflow creates an intelligent, automated backup system that transforms n8n workflow backups from inefficient multi-file operations into streamlined single-archive automation. The system:

1. Triggers automatically every 4 hours or manually on demand
2. Creates timestamped folders in Google Drive for organized backup storage
3. Retrieves all n8n workflows via the n8n API in a single operation
4. Converts workflows to JSON and aggregates binary data efficiently
5. Compresses all workflows into a single ZIP archive (eliminating the need for loops)
6. Uploads the compressed backup to Google Drive in one operation
7. Provides real-time Slack notifications for monitoring and alerting

Key Innovation: No Loops Required - Unlike traditional backup workflows that use SplitInBatches or loops to process workflows individually, this system processes all workflows simultaneously and creates a single compressed archive, dramatically improving performance and reliability.

How to set up

1. Configure Google Drive API credentials
   - Set up Google Drive OAuth2 API credentials
   - Ensure the service account has access to create folders and upload files
   - Update the parent folder ID where backup folders will be created
2. Configure n8n API access
   - Set up internal n8n API credentials for workflow retrieval
   - Ensure the API has permissions to read all workflows
   - Configure retry settings for reliability
3. Set up Slack notifications
   - Configure Slack API credentials for the notification channel
   - Update the channel ID where backup notifications will be sent
   - Customize notification messages as needed
4. Schedule configuration
   - The workflow automatically runs every 4 hours
   - Manual execution is available for immediate backups
   - Adjust the schedule in the Schedule Trigger node as needed
5. Test the integration
   - Run a manual backup to verify all components work correctly
   - Check Google Drive for the created backup folder and ZIP file
   - Verify Slack notifications are received

Requirements
- **n8n instance** (self-hosted or cloud) with API access
- **Google Drive account** with API access and sufficient storage
- **Slack workspace** for notifications (optional but recommended)
- **n8n workflows** that need regular backup protection

How to customize the workflow

Modify backup frequency
- Adjust the Schedule Trigger node for different intervals (hourly, daily, weekly)
- Add multiple schedule triggers for different backup types
- Implement conditional scheduling based on workflow changes

Enhance storage strategy
- Add multiple Google Drive accounts for redundancy
- Implement backup rotation and retention policies
- Add compression options (ZIP, TAR, 7Z) for different use cases

Expand notification system
- Add email notifications for critical backup events
- Integrate with monitoring systems (PagerDuty, OpsGenie)
- Add backup success/failure metrics and reporting

Security enhancements
- Implement backup encryption before upload
- Add backup verification and integrity checks
- Set up access logging and audit trails

Performance optimizations
- Add parallel processing for large workflow collections
- Implement incremental backup strategies
- Add backup size monitoring and alerts

Key Features
- **Zero-loop architecture** - Processes all workflows simultaneously without batch processing
- **Intelligent compression** - Single ZIP archive instead of multiple individual files
- **Automated scheduling** - Runs every 4 hours with manual override capability
- **Organized storage** - Timestamped folders with clear naming conventions
- **Real-time monitoring** - Slack notifications for all backup events
- **Error handling** - Centralized error management with graceful failure handling
- **Scalable design** - Handles any number of workflows efficiently

Technical Architecture Highlights

Eliminated inefficiencies
- **No SplitInBatches node** - Replaced with direct workflow processing
- **No individual file uploads** - Single compressed archive upload
- **No loop iterations** - All workflows processed in one operation
- **No batch error handling** - Centralized error management

Performance improvements
- **Faster execution** - Eliminated loop overhead and multiple API calls
- **Reduced API quota usage** - Single Google Drive upload per backup
- **Better resource utilization** - Efficient memory and processing usage
- **Improved reliability** - Fewer points of failure in the workflow

Data flow optimization
- **Parallel processing** - Folder creation and workflow retrieval happen simultaneously
- **Efficient aggregation** - Code node processes all binaries at once
- **Smart compression** - Single ZIP with all workflows included
- **Streamlined upload** - One file transfer instead of multiple operations

Use Cases
- **Production n8n instances** requiring reliable backup protection
- **Development teams** needing workflow version control and recovery
- **DevOps automation** requiring disaster recovery capabilities
- **Business continuity** planning for critical automation workflows
- **Compliance requirements** for workflow documentation and backup
- **Team collaboration** with shared workflow backup access

Business Value
- **Risk Mitigation** - Protects valuable automation investments
- **Operational Efficiency** - Faster, more reliable backup processes
- **Cost Reduction** - Lower storage costs and API usage
- **Compliance Support** - Organized backup records for audits
- **Team Productivity** - Reduced backup management overhead
- **Scalability** - Handles growth without performance degradation

This template revolutionizes n8n workflow backup by eliminating the complexity and inefficiency of traditional loop-based approaches, providing a robust, scalable solution that grows with your automation needs while maintaining the highest levels of reliability and performance.
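The "Efficient aggregation" step (one Code node turning every workflow returned by the n8n API into a binary JSON file attached to a single item, so one compression node can zip them all) is only described in prose above. A minimal sketch of such a Code node, with assumed property names and binary keys rather than the template's exact code, could look like this:

```javascript
// Hypothetical n8n Code node ("Run Once for All Items"):
// turn every workflow returned by the n8n API into its own binary JSON file,
// all attached to a single item, so one Compression node can zip them together.
const out = { json: { count: 0 }, binary: {} };

for (const [i, item] of $input.all().entries()) {
  const wf = item.json; // one workflow object from the n8n API
  const fileName = `${(wf.name || 'workflow').replace(/[^\w-]+/g, '_')}_${wf.id || i}.json`;

  out.binary[`file_${i}`] = {
    data: Buffer.from(JSON.stringify(wf, null, 2)).toString('base64'),
    fileName,
    mimeType: 'application/json',
  };
  out.json.count += 1;
}

return [out];
```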
by Jaruphat J.
⚠️ Note: All sensitive credentials should be set via n8n Credentials or environment variables. Do not hardcode API keys in nodes.

Who's it for

Marketers, creators, and automation builders who want to generate UGC-style ad images and short videos automatically from a Google Sheet. Ideal for e‑commerce SKUs, agencies, or teams that need many variations quickly.

What it does (Overview)

This template turns a spreadsheet row into ad images and optionally 5–8s videos.
- **Zone 0 — Image-only pipeline (Gemini/OpenRouter)**: Creates an ad image from a product link and prompt, uploads it to Drive, and updates the sheet (no video step).
- **Zone 1 — Create image (Fal nano‑banana) + prepare for video**: Generates an image via Fal.ai, polls status, fetches the URL, then analyzes the image with an LLM to prepare scene prompts.
- **Zone 2 — Generate video (WAN2.2 & Veo3)**: Uses the generated image + structured scene prompts to create short clips, uploads them to Drive, and writes the video URL back to the sheet.

Requirements
- **Fal.ai API key** (env: FAL_KEY)
- **Google Sheets / Google Drive** OAuth2 credentials
- **OpenAI / Gemini (via OpenRouter)** for image analysis or alternative image generation
- A Google Sheet with columns, e.g.: product | presenter | prompt | img_url | video_url
- Google Drive files set to Anyone with link → Viewer so APIs can fetch them

How to set up
1. Credentials: Add Google Sheets + Google Drive (OAuth2), Fal.ai (Header Auth with Authorization: Key {{$env.FAL_KEY}}), and OpenAI/OpenRouter.
2. Google Sheet: Create the columns above. Paste product image Drive links (the workflow converts them to direct links automatically).
3. Import the workflow: Use the provided JSON. Confirm node credentials resolve.
4. Run: Start with Zone 0 to verify the image-only flow, then test Zone 1 + Zone 2 for the full image→video pipeline.

Zone 0 — Create Ad Image (Image-only)

This path is for creating just an image and stopping. It reads the Gemini tab in the Sheet, generates an image via OpenRouter/Gemini, converts base64 to a file, uploads to Drive, and writes back img_url.

Key nodes
- **Get Data1 (Google Sheets)** → reads the Gemini tab
- **setImgeURL (Set)** → converts Drive URLs to direct (uc?export=view&id=...)
- **CreateImagebyOpernRouter (Gemini)** → calls google/gemini-2.5-flash-image-preview:free
- **wait20sec (Wait)** → small delay
- **setBase64data (Code)** → splits the data URI into { data, mimeType, fileName }
- **Convert to File** → creates binary
- **uploadImagetoGdrive (Google Drive)** → uploads image
- **updateImageURL (Google Sheets)** → writes back img_url

Zone 1 — Create Image (Fal nano‑banana) + Prepare for Video

Reads product rows, normalizes Drive links, generates an image with Fal nano‑banana, polls until complete, fetches the output image URL, then runs an image analysis (OpenAI Vision) to prepare structured text for the video step.

Key nodes
- **Get Data (Google Sheets)** → reads the nanoBanana tab
- **Edit Fields (Set)** → converts Drive links to direct (uc?export=view&id=...)
- **Call Fal.ai API (nanoBanana)** → POST https://queue.fal.run/fal-ai/nano-banana/edit
- **Get image status / If / Wait / Get the image** → job polling until complete
- **Analyze image (OpenAI Vision)** → returns a structured description (brand text, colors, type, short description)

Zone 2 — Generate Video (WAN2.2 & Veo3)

Creates a 5–8s UGC clip using the generated image + structured scene prompt.

Key nodes
- **Describe Each Scene for Video (AI Agent)** → expands analysis + user intent into detailed scene sections (Characters, Scene Background, Camera Movement, Movement in Scene, Sound Design)
- **Structured Output Parser2 (Schema)** → enforces a consistent JSON structure
- **Veo3 (HTTP)** → POST /fal-ai/veo3/image-to-video with prompt + image_url
- **Call Fal.ai API (WAN2.2) [Optional]** → POST /fal-ai/wan/v2.2-a14b/image-to-video
- **Wait for the video / Get the video status / Video status / Get the video** → polling loop
- **HTTP Request (Download File)** → downloads MP4
- **uploadImagetoGdrive1 (Google Drive)** → uploads video
- **updateVideoURL (Google Sheets)** → writes back video_url

Node settings (high‑level)

**Drive Link Parser (Set)**

    {{ (() => {
      const u = $json.product || '';
      const q = u.match(/[?&]id=([-\w]{25,})/);
      const d = u.match(/\/d\/([-\w]{25,})/);
      const any = u.match(/[-\w]{25,}/);
      const id = q?.[1] || d?.[1] || (any ? any[0] : '');
      return id ? 'https://drive.google.com/uc?export=view&id=' + id : '';
    })() }}

How to customize the workflow
- Adjust AI prompts to change ad style (funny, luxury, cozy, techy).
- Change the video aspect ratio for TikTok/IG/Shorts (9:16, 1:1, 16:9).
- Extend the Sheet schema for campaign labels, audiences, hashtags.
- Add distribution (Slack/LINE/Telegram) after the Drive upload.

Troubleshooting
- **JSON parameter needs to be valid JSON** → Ensure expressions return objects, not strings.
- **403 on images** → Make Drive files public (Viewer) and convert links.
- **Video never completes** → Check status_url, retry with -fast models or off‑peak times.

Template metadata
- **Uses:** Google Sheets, Google Drive, HTTP Request, Wait/If/Switch, Code, Convert to File, OpenAI/Gemini (optional), Fal.ai models (nano‑banana, WAN2.2, Veo3)
- **Source workflow JSON:** Gemini_NanoBanana_Template.json (node names and connections match)

Product Image
Product Image - nano Banana
Product Video - Veo3
Product Video - Wan2.2
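The polling nodes in Zones 1 and 2 are listed above only by name. A rough sketch of the submit-and-poll pattern they implement is shown below; the request body fields and queue response fields (status_url, response_url, status values) follow the Fal.ai queue API as commonly documented and should be treated as assumptions to verify against the actual nodes.

```javascript
// Rough sketch of the Fal.ai queue pattern used by the polling nodes (assumed field names).
const FAL_KEY = process.env.FAL_KEY;
const headers = { Authorization: `Key ${FAL_KEY}`, 'Content-Type': 'application/json' };

async function generateImage(prompt, imageUrl) {
  // 1) Submit the job to the queue endpoint used in Zone 1
  const submit = await fetch('https://queue.fal.run/fal-ai/nano-banana/edit', {
    method: 'POST',
    headers,
    body: JSON.stringify({ prompt, image_urls: [imageUrl] }), // body fields are assumptions
  }).then(r => r.json());

  // 2) Poll the status_url returned by the queue until the job completes
  let status;
  do {
    await new Promise(res => setTimeout(res, 5000)); // the template uses Wait nodes for this
    status = await fetch(submit.status_url, { headers }).then(r => r.json());
  } while (status.status !== 'COMPLETED');

  // 3) Fetch the final result from response_url
  return fetch(submit.response_url, { headers }).then(r => r.json());
}
```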
by Daniel Shashko
How it Works

This workflow automates competitive price intelligence using Bright Data's enterprise web scraping API. On a scheduled basis (default: daily at 9 AM), the system loops through configured competitor product URLs, triggers Bright Data's web scraper to extract real-time pricing data from each site, and intelligently compares competitor prices against your current pricing.

The workflow handles the full scraping lifecycle: it sends scraping requests to Bright Data, waits for completion, fetches the scraped product data, and parses prices from various formats and website structures. All pricing data is automatically logged to Google Sheets for historical tracking and trend analysis. When a competitor's price drops below yours by more than the configured threshold (e.g., 10% cheaper), the system immediately sends detailed alerts via Slack and email to your pricing team with actionable intelligence.

At the end of each monitoring run, the workflow generates a comprehensive daily summary report that aggregates all competitor data, calculates average price differences, identifies the lowest and highest competitors, and provides a complete competitive landscape view. This eliminates hours of manual competitor research and enables data-driven pricing decisions in real time.

Who is this for?
- E-commerce businesses and online retailers needing automated competitive price monitoring
- Product managers and pricing strategists requiring real-time competitive intelligence
- Revenue operations teams managing dynamic pricing strategies across multiple products
- Marketplaces competing in price-sensitive categories where margins matter
- Any business that needs to track competitor pricing without manual daily checks

Setup Steps

Setup time: approx. 30-40 minutes (Bright Data configuration, credential setup, competitor URL configuration)

Requirements:
- Bright Data account with Web Scraper API access
- Bright Data API token (from dashboard)
- Google account with a spreadsheet for price tracking
- Slack workspace with pricing channels
- SMTP email provider for alerts

1. Sign up for Bright Data and create a web scraping dataset (use the e-commerce template for product data)
2. Obtain your Bright Data API token and dataset ID from the dashboard
3. Configure these nodes:
   - Schedule Daily Check: Set monitoring frequency using a cron expression (default: 9 AM daily)
   - Load Competitor URLs: Add the competitor product URLs array, configure your current price, set the alert threshold percentage
   - Loop Through Competitors: Automatically handles multiple URLs (no configuration needed)
   - Scrape with Bright Data: Add Bright Data
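As described under How it Works, an alert fires when a competitor undercuts your price by more than the configured threshold. A minimal sketch of that comparison, with illustrative field names rather than the template's actual node code, could look like this:

```javascript
// Hypothetical n8n Code node: decide whether a scraped competitor price warrants an alert.
const myPrice = $json.myCurrentPrice;         // your configured price
const competitorPrice = $json.scrapedPrice;   // parsed from the Bright Data result
const thresholdPct = $json.alertThresholdPct; // e.g. 10 (percent)

// Positive when the competitor is cheaper than you
const diffPct = ((myPrice - competitorPrice) / myPrice) * 100;

return [{
  json: {
    competitorUrl: $json.url,
    competitorPrice,
    myPrice,
    priceDifferencePct: Math.round(diffPct * 100) / 100,
    sendAlert: diffPct > thresholdPct, // e.g. competitor is more than 10% cheaper
  },
}];
```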
by Elay Guez
🔍 AI-Powered Web Research in Google Sheets with Bright Data

📋 Overview

Transform any Google Sheets cell into an intelligent web scraper! Type =BRIGHTDATA("cell", "search prompt") and get an AI-filtered result from any website in ~20 seconds.

What happens automatically:
- AI optimizes your search query
- Bright Data scrapes the web (bypasses bot detection)
- AI analyzes and filters the result
- Returns clean data directly to your cell
- Completes in <25 seconds

Cost: ~$0.02-0.05 per search | Time saved: 3-5 minutes per search

👥 Who's it for
- Market researchers needing competitive intelligence
- E-commerce teams tracking prices
- Sales teams doing lead prospecting
- SEO specialists gathering content research
- Real estate agents monitoring listings
- Anyone tired of manual copy-paste

⚙️ How it works
1. Webhook Call - Google Sheets function sends a POST request
2. Data Preparation - Organizes the input structure
3. AI Query Optimization - GPT-4.1 Mini refines the search query
4. Web Scraping - Bright Data fetches data while bypassing blocks
5. AI Analysis - GPT-4o Mini filters and summarizes the result
6. Response - Returns plain text to your cell
7. Logging - Updates logs for monitoring

🛠️ Setup Instructions

Time to deploy: 20 minutes

Requirements
- n8n instance with a public URL
- Bright Data account + API key
- OpenAI API key
- Google account for Apps Script

Part 1: n8n Workflow Setup
1. Import this template into your n8n instance
2. Configure the Webhook node:
   - Copy your webhook URL: https://n8n.yourdomain.com/webhook/brightdata-search
   - Set authentication: Header Auth
   - Set API key: 12312346 (or create your own)
3. Add OpenAI credentials to the AI nodes
4. Configure Bright Data: Add API credentials
5. Configure the output language: Manually edit the "Set Variables" node
6. Test the workflow with a manual execution
7. Activate the workflow

Part 2: Google Sheets Function
1. Open Google Sheet → Extensions → Apps Script
2. Paste this code:

    function BRIGHTDATA(prompt, source) {
      if (!prompt || prompt === "") {
        return "❌ Must enter prompt";
      }
      source = source || "google";

      // Update with YOUR webhook URL
      const N8N_WEBHOOK_URL = "https://your-n8n-domain.com/webhook/brightdata-search";
      // Update with YOUR password
      const API_KEY = "12312346";

      let spreadsheetId, sheetName, cellAddress;
      try {
        const sheet = SpreadsheetApp.getActiveSheet();
        const activeCell = sheet.getActiveCell();
        spreadsheetId = SpreadsheetApp.getActiveSpreadsheet().getId();
        sheetName = sheet.getName();
        cellAddress = activeCell.getA1Notation();
      } catch (e) {
        return "❌ Cannot identify cell";
      }

      const payload = {
        prompt: prompt,
        source: source.toLowerCase(),
        context: {
          spreadsheetId: spreadsheetId,
          sheetName: sheetName,
          cellAddress: cellAddress,
          timestamp: new Date().toISOString()
        }
      };

      const options = {
        method: "post",
        contentType: "application/json",
        payload: JSON.stringify(payload),
        muteHttpExceptions: true,
        headers: {
          "Accept": "text/plain",
          "key": API_KEY
        }
      };

      try {
        const response = UrlFetchApp.fetch(N8N_WEBHOOK_URL, options);
        const responseCode = response.getResponseCode();
        if (responseCode !== 200) {
          Logger.log("Error response: " + response.getContentText());
          return "❌ Error " + responseCode;
        }
        return response.getContentText();
      } catch (error) {
        Logger.log("Exception: " + error.toString());
        return "❌ Connection error: " + error.toString();
      }
    }

    function doGet(e) {
      return ContentService.createTextOutput(JSON.stringify({
        status: "alive",
        message: "Apps Script is running",
        timestamp: new Date().toISOString()
      })).setMimeType(ContentService.MimeType.JSON);
    }

3. Update N8N_WEBHOOK_URL with your webhook
4. Update API_KEY with your password
5. Save (Ctrl+S / Cmd+S) - Important!
6. Close the Apps Script editor

💡 Usage Examples

=BRIGHTDATA("C3", "What is the current price of the product?")
=BRIGHTDATA("D30", "What is the size of this company?")
=BRIGHTDATA("A4", "Is this company hiring Developers?")

🎨 Customization

Easy tweaks
- **AI Models** - Switch to GPT-4o for better optimization
- **Response Format** - Modify the prompt for specific outputs
- **Speed** - Optimize AI prompts to reduce time
- **Language** - Change prompts for any language

Advanced options
- Implement rate limiting
- Add data validation
- Create an async mode for long queries
- Add Slack notifications

🚀 Pro Tips
- **Be Specific** - "What is iPhone 15 Pro 256GB US price?" beats "What is iPhone price?"
- **Speed Matters** - Keep prompts concise (30s timeout limit)
- **Monitor Costs** - Track Bright Data usage
- **Debug** - Check workflow logs for errors

⚠️ Important Notes
- **Timeout:** 30-second Google Sheets limit (aim for <20s)
- **Plain Text Only:** No JSON responses
- **Costs:** Monitor Bright Data usage at console.brightdata.com
- **Security:** Keep API keys secret
- **No Browser Storage:** Don't use localStorage/sessionStorage

🔧 Troubleshooting

| Error | Solution |
|-------|----------|
| "Exceeded maximum execution time" | Optimize AI prompts or use async mode |
| "Could not fetch data" | Verify Bright Data credentials |
| Empty cell | Check n8n logs for AI parsing issues |
| Broken characters | Verify UTF-8 encoding in the webhook node |

📚 Resources
- Bright Data API Docs
- n8n Webhook Documentation
- Google Apps Script Reference

Built with ❤️ by Elay Guez
by Yves Tkaczyk
Use cases

Monitor a Google Drive folder, parsing PDF, DOCX, and image files into a destination folder, ready for further processing (e.g. RAG ingestion, translation, etc.). Keep a processing log in Google Sheets and send Slack notifications.

How it works
1. Trigger: Watch a Google Drive folder for new and updated files.
2. Create a uniquely named destination folder, copying the input file.
3. Parse the file using Mistral Document, extracting content and handling non-OCRable images separately.
4. Save the data returned by Mistral Document into the destination Google Drive folder (raw JSON file, Markdown files, and images) for further processing.

How to use
1. Google Drive and Google Sheets nodes:
   - Create Google credentials with access to Google Drive and Google Sheets. Read more about Google Credentials.
   - Update all Google Drive and Google Sheets nodes (14 nodes total) to use the credentials.
2. Mistral node:
   - Create Mistral Cloud API credentials. Read more about Mistral Cloud Credentials.
   - Update the OCR Document node to use the Mistral Cloud credentials.
3. Slack nodes:
   - Create Slack OAuth2 credentials. Read more about Slack OAuth2 credentials.
   - Update the two Slack nodes, Send Success Message and Send Error Message: set the credentials and select the channel where you want to send the notifications (channels can be different for success and errors).
4. Create a Google Sheets spreadsheet following the steps in Google Sheets Configuration. Ensure the spreadsheet can be accessed as Editor by the account used by the Google Credentials above.
5. Create a directory for input files and a directory for output folders/files. Ensure the directories can be accessed by the account used by the Google Credentials.
6. Update the File Created, File Updated, and Workflow Configuration nodes following the steps in the green Notes.

Requirements
- Google account with Google API access
- Mistral Cloud account with access to a Mistral API key
- Slack account with access to a Slack client ID and secret ID
- Basic n8n knowledge: understanding of triggers, expressions, and credential management

Who's it for

Anyone building a data pipeline ingesting files to be OCRed for further processing.

🔒 Security

All credentials are stored as n8n credentials. The only information stored in this workflow that could be considered sensitive are the Google Drive directory and Sheet IDs. These directories and the spreadsheet should be secured according to your needs.

Need Help? Reach out on LinkedIn or ask in the Forum!
by takuma
This workflow automates reputation management for physical stores (restaurants, retail, clinics) by monitoring Google Maps reviews, analyzing them with AI, and drafting professional replies. It acts as a 24/7 customer support assistant, ensuring you never miss a negative review and saving hours of manual writing time.

Who is this for?
- **Store Managers & Owners:** Keep track of customer sentiment without manually checking Google Maps every day.
- **Marketing Agencies:** Automate local SEO reporting and response drafting for multiple clients.
- **Customer Support Teams:** Get instant alerts for negative feedback to resolve issues quickly.

How it works
1. Schedule: Runs every 24 hours (customizable) to fetch the latest data.
2. Scrape: Uses Apify to retrieve the latest reviews from a specific Google Maps URL.
3. Filter: Checks the Google Sheet database to identify only new reviews and avoid duplicates.
4. AI Analysis: An AI Agent (via OpenRouter/OpenAI) analyzes the review text to:
   - Generate a short summary.
   - Draft a polite, context-aware reply based on the star rating (e.g., apologies for low stars, gratitude for high stars).
5. Alert: Sends a Slack notification.
   - Low Rating (<4 stars): Alerts a specific channel (e.g., #customer-support) with a warning.
   - High Rating: Alerts a general channel (e.g., #wins) to celebrate.
6. Save: Appends the review details, AI summary, and draft reply to the Google Sheet.

Requirements
- **n8n:** Cloud or self-hosted (v1.0+).
- **Apify Account:** To run the **Google Maps Reviews Scraper**.
- **Google Cloud Platform:** Enabled Google Sheets API.
- **Slack Workspace:** A webhook URL or OAuth connection.
- **OpenRouter (or OpenAI) API Key:** For the LLM generation.

How to set up
1. Google Sheets: Create a new sheet with the following headers in the first row: reviewId, publishedAt, reviewerName, stars, text, ai_summary, ai_reply, reviewUrl, output, publishedAt date.
2. Configure Credentials: Set up your accounts for Google Sheets, Apify, Slack, and OpenRouter within n8n.
3. Edit the "CONFIG" node:
   - MAPS_URL: Paste the full Google Maps link to your store.
   - SHEET_ID: Paste the ID found in your Google Sheet URL.
   - SHOP_NAME: Your store's name.
4. Slack Nodes: Select the appropriate channels for positive and negative alerts.

How to customize
- **Change the AI Persona:** Open the **AI Agent** node and modify the "System Message" to match your brand's tone of voice (e.g., casual, formal, or witty).
- **Adjust Alert Thresholds:** Edit the **If Rating < 4** node to change the criteria for what constitutes a "negative" review (e.g., strictly < 3 stars).
- **Multi-Store Support:** You can loop this workflow over a list of URLs to manage multiple locations in a single execution.
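The Filter step (keeping only reviews whose reviewId is not yet in the Google Sheet) can be sketched as a small Code node like the one below; the referenced node name and field names mirror the sheet headers above but are otherwise assumptions, not the template's exact code.

```javascript
// Hypothetical n8n Code node ("Run Once for All Items"):
// keep only reviews whose reviewId is not already present in the Google Sheet.
const scraped = $input.all();               // items from the Apify scraper
const existing = $('Get Sheet Rows').all(); // assumed name of the Google Sheets "read" node
const knownIds = new Set(existing.map(i => String(i.json.reviewId)));

const newReviews = scraped.filter(i => !knownIds.has(String(i.json.reviewId)));
return newReviews;
```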
by Rahul Joshi
📊 Description

Streamline sales prioritization by automatically identifying, scoring, and routing high-value leads from GoHighLevel CRM to your sales team. This workflow scores contacts daily, flags top prospects, alerts sales reps in Slack, logs data to Google Sheets, and schedules instant follow-ups in Google Calendar — ensuring no valuable lead slips through the cracks. 🚀📈

What This Template Does
- Triggers daily at 8:00 AM to fetch all contacts from GoHighLevel CRM. ⏰
- Processes lead data and extracts key details from custom fields. 🧩
- Calculates lead scores using your predefined CRM field mappings. 🔢
- Filters out incomplete or invalid contacts to ensure clean data flow. 🧼
- Identifies high-value leads with a score above 80 for immediate attention. 🎯
- Sends real-time Slack alerts to sales teams with contact and lead score details. 💬
- Logs high-priority leads into a dedicated Google Sheet for tracking and analytics. 📊
- Creates automatic Google Calendar follow-up events within 1 hour of detection. 📅

Key Benefits
✅ Automatically surfaces top leads for faster follow-up
✅ Keeps sales teams aligned through instant Slack alerts
✅ Eliminates manual data review and prioritization
✅ Centralizes performance tracking via Google Sheets
✅ Ensures consistent follow-up with Google Calendar scheduling
✅ Fully customizable lead score threshold and timing

Features
- Daily scheduled trigger (8:00 AM)
- GoHighLevel CRM integration for contact retrieval
- Smart lead scoring via custom field mapping
- Conditional filtering for high-value leads
- Slack alert system for real-time engagement
- Google Sheets logging for transparency and analytics
- Auto-created Google Calendar events for follow-ups

Requirements
- GoHighLevel API credentials with contact read permissions
- Slack Bot token with chat:write access
- Google Sheets OAuth2 credentials
- Google Calendar OAuth2 credentials
- Defined custom fields for Lead Score and Assigned Representative in GoHighLevel

Target Audience
- Sales and business development teams tracking high-value leads
- Marketing teams optimizing lead qualification and follow-up
- Agencies using GoHighLevel for CRM and lead management
- Operations teams centralizing sales activity and analytics

Step-by-Step Setup Instructions
1. Connect your GoHighLevel OAuth2 credentials and ensure contact API access.
2. Replace the placeholder custom field IDs (Lead Score & Assigned Rep) in the Code node (see the sketch below).
3. Add your Slack channel ID for team notifications.
4. Connect your Google Sheets document and replace its Sheet ID in the workflow.
5. Link Google Calendar for automatic follow-up event creation.
6. Adjust the lead score threshold (default: 80) if needed.
7. Run a manual test to verify data flow, then enable the daily trigger for automation.
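As referenced in the setup steps, the scoring Code node reads the Lead Score custom field and keeps only contacts above the threshold. A minimal sketch, assuming a GoHighLevel-style customFields array and using placeholder field IDs, could look like this:

```javascript
// Hypothetical version of the scoring/extraction Code node.
// The custom field IDs below are placeholders; replace them with the IDs
// from your own GoHighLevel location (as noted in the setup steps).
const LEAD_SCORE_FIELD_ID = 'REPLACE_WITH_LEAD_SCORE_FIELD_ID';
const ASSIGNED_REP_FIELD_ID = 'REPLACE_WITH_ASSIGNED_REP_FIELD_ID';
const THRESHOLD = 80; // default high-value cutoff described above

const highValueLeads = [];

for (const item of $input.all()) {
  const contact = item.json;
  const customFields = contact.customFields || [];

  // Look up the two custom fields by ID (payload structure is an assumption)
  const scoreField = customFields.find(f => f.id === LEAD_SCORE_FIELD_ID);
  const repField = customFields.find(f => f.id === ASSIGNED_REP_FIELD_ID);
  const leadScore = Number(scoreField?.value ?? 0);

  // Skip incomplete contacts and anything at or below the threshold
  if (!contact.email || Number.isNaN(leadScore) || leadScore <= THRESHOLD) continue;

  highValueLeads.push({
    json: {
      name: `${contact.firstName || ''} ${contact.lastName || ''}`.trim(),
      email: contact.email,
      leadScore,
      assignedRep: repField?.value || 'Unassigned',
    },
  });
}

return highValueLeads;
```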
by Le Nguyen
PDF Invoice Extractor (AI)

End-to-end pipeline: Watch Drive ➜ Download PDF ➜ OCR text ➜ AI normalize to JSON ➜ Upsert Buyer (Account) ➜ Create Opportunity ➜ Map Products ➜ Create OLI via Composite API ➜ Archive to OneDrive.

Node by node (what it does & key setup)

1) Google Drive Trigger
- **Purpose**: Fire when a new file appears in a specific Google Drive folder.
- **Key settings**:
  - Event: fileCreated
  - Folder ID: google drive folder id
  - Polling: everyMinute
  - Creds: googleDriveOAuth2Api
- **Output**: Metadata { id, name, ... } for the new file.

2) Download File From Google
- **Purpose**: Get the file binary for processing and archiving.
- **Key settings**:
  - Operation: download
  - File ID: ={{ $json.id }}
  - Creds: googleDriveOAuth2Api
- **Output**: Binary (default key: data) and the original metadata.

3) Extract from File
- **Purpose**: Extract text from the PDF (OCR as needed) for AI parsing.
- **Key settings**:
  - Operation: pdf
  - OCR: enable for scanned PDFs (in options)
- **Output**: JSON with OCR text at {{ $json.text }}.

4) Message a model (AI JSON Extractor)
- **Purpose**: Convert OCR text into a **strict normalized JSON array** (invoice schema).
- **Key settings**:
  - Node: @n8n/n8n-nodes-langchain.openAi
  - Model: gpt-4.1 (or gpt-4.1-mini)
  - Message role: system (the strict prompt; references {{ $json.text }})
  - jsonOutput: true
  - Creds: openAiApi
- **Output** (per item): $.message.content → the parsed **JSON** (ensure it's an array).

5) Create or update an account (Salesforce)
- **Purpose**: Upsert **Buyer** as an Account using an external ID.
- **Key settings**:
  - Resource: account
  - Operation: upsert
  - External Id Field: tax_id__c
  - External Id Value: ={{ $json.message.content.buyer.tax_id }}
  - Name: ={{ $json.message.content.buyer.name }}
  - Creds: salesforceOAuth2Api
- **Output**: Account record (captures Id) for the downstream **Opportunity**.

6) Create an opportunity (Salesforce)
- **Purpose**: Create an Opportunity linked to the Buyer (Account).
- **Key settings**:
  - Resource: opportunity
  - Name: ={{ $('Message a model').item.json.message.content.invoice.code }}
  - Close Date: ={{ $('Message a model').item.json.message.content.invoice.issue_date }}
  - Stage: Closed Won
  - Amount: ={{ $('Message a model').item.json.message.content.summary.grand_total }}
  - AccountId: ={{ $json.id }} (from the Upsert Account output)
  - Creds: salesforceOAuth2Api
- **Output**: Opportunity Id for OLI creation.

7) Build SOQL (Code / JS)
- **Purpose**: Collect unique product **codes** from the AI JSON and build a SOQL query for PricebookEntry by Pricebook2Id.
- **Key settings**:
  - pricebook2Id (hardcoded in script): e.g., 01sxxxxxxxxxxxxxxx
  - Source lines: $('Message a model').first().json.message.content.products
- **Output**: { soql, codes }

8) Query PricebookEntries (Salesforce)
- **Purpose**: Fetch PricebookEntry.Id for each Product2.ProductCode.
- **Key settings**:
  - Resource: search
  - Query: ={{ $json.soql }}
  - Creds: salesforceOAuth2Api
- **Output**: Items with Id, Product2.ProductCode (used for mapping).

9) Code in JavaScript (Build OLI payloads)
- **Purpose**: Join lines with PBE results and the Opportunity Id ➜ build **OpportunityLineItem** payloads.
- **Inputs**:
  - OpportunityId: ={{ $('Create an opportunity').first().json.id }}
  - Lines: ={{ $('Message a model').first().json.message.content.products }}
  - PBE rows: from previous node items
- **Output**: { body: { allOrNone:false, records:[{ OpportunityLineItem... }] } }
- **Notes**:
  - Converts discount_total ➜ per-unit if needed (currently commented out for standard pricing).
  - Throws on missing PBE mapping or empty lines.

10) Create Opportunity Line Items (HTTP Request)
- **Purpose**: Bulk create OLIs via the Salesforce Composite API.
- **Key settings**:
  - Method: POST
  - URL: https://<your-instance>.my.salesforce.com/services/data/v65.0/composite/sobjects
  - Auth: salesforceOAuth2Api (predefined credential)
  - Body (JSON): ={{ $json.body }}
- **Output**: Composite API results (per-record statuses).

11) Update File to One Drive
- **Purpose**: Archive the **original PDF** in OneDrive.
- **Key settings**:
  - Operation: upload
  - File Name: ={{ $json.name }}
  - Parent Folder ID: onedrive folder id
  - Binary Data: true (from the Download node)
  - Creds: microsoftOneDriveOAuth2Api
- **Output**: Uploaded file metadata.

Data flow (wiring)
- Google Drive Trigger → Download File From Google
- Download File From Google → Extract from File → Update File to One Drive
- Extract from File → Message a model
- Message a model → Create or update an account
- Create or update an account → Create an opportunity
- Create an opportunity → Build SOQL
- Build SOQL → Query PricebookEntries
- Query PricebookEntries → Code in JavaScript
- Code in JavaScript → Create Opportunity Line Items

Quick setup checklist
- 🔐 Credentials: Connect Google Drive, OneDrive, Salesforce, OpenAI.
- 📂 IDs:
  - Drive Folder ID (watch)
  - OneDrive Parent Folder ID (archive)
  - Salesforce Pricebook2Id (in the JS SOQL builder)
- 🧠 AI Prompt: Use the strict system prompt; jsonOutput = true.
- 🧾 Field mappings:
  - Buyer tax id/name → Account upsert fields
  - Invoice code/date/amount → Opportunity fields
  - Product name must equal your Product2.ProductCode in SF.
- ✅ Test: Drop a sample PDF → verify:
  - AI returns array JSON only
  - Account/Opportunity created
  - OLI records created
  - PDF archived to OneDrive

Notes & best practices
- If PDFs are scans, enable OCR in Extract from File.
- If the AI returns non-JSON, keep "Return only a JSON array" as the last line of the prompt and keep jsonOutput enabled.
- Consider adding validation on parsing.warnings to gate Salesforce writes.
- For discounts/taxes in OLI: standard OLI fields don't support per-line discount amounts directly; model them in UnitPrice or custom fields.
- Replace the Composite API URL with your org's domain or use the Salesforce node's Bulk Upsert for simplicity.
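A minimal sketch of the node 7 "Build SOQL" logic, assuming the AI output shape described in node 4 and a placeholder Pricebook2Id, could look like this:

```javascript
// Hypothetical "Build SOQL" Code node: collect unique product codes from the
// AI-normalized invoice and build a PricebookEntry lookup query.
const pricebook2Id = '01sxxxxxxxxxxxxxxx'; // replace with your Salesforce Pricebook2Id

const products = $('Message a model').first().json.message.content.products || [];
const codes = [...new Set(products.map(p => p.name).filter(Boolean))];

if (codes.length === 0) {
  throw new Error('No product lines found in the parsed invoice');
}

// Escape single quotes so product codes are safe inside the SOQL string literals
const quoted = codes.map(c => `'${String(c).replace(/'/g, "\\'")}'`).join(', ');
const soql =
  `SELECT Id, Product2.ProductCode FROM PricebookEntry ` +
  `WHERE Pricebook2Id = '${pricebook2Id}' AND Product2.ProductCode IN (${quoted})`;

return [{ json: { soql, codes } }];
```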
by KlickTipp
Community Node Disclaimer

This workflow uses KlickTipp community nodes, available for self-hosted n8n instances only.

Who's it for

Digital marketers, social media managers, and coaches who engage leads through Instagram DMs and want to automate personalized outreach, lead enrichment, and segmentation in KlickTipp — without manual follow-ups or data entry.

How it works

This workflow creates a complete Instagram-to-email enrichment loop — starting with personalized DM outreach, capturing responses via JotForm, enriching profile data, and syncing everything with KlickTipp. When a workflow trigger or campaign action occurs, it:
1. Sends a personalized Instagram DM inviting the user to fill out a JotForm.
2. Listens for form submissions in real time.
3. Retrieves the lead's Instagram profile data via the Facebook Graph API.
4. Matches the username to the Instagram DM ID in a Google Sheet.
5. Generates AI-powered marketing insights using OpenAI.
6. Subscribes or updates the lead in KlickTipp, mapping enriched fields and tags.

The result: every DM-initiated lead is captured, analyzed, and segmented — ready for intelligent follow-ups and personalized campaigns.

How to set up
1. Connect accounts for KlickTipp, JotForm, Google Sheets, Facebook Graph API, and OpenAI.
2. Set up a KlickTipp tag or campaign trigger to initiate the DM sending.
3. Create KlickTipp fields for Instagram data (e.g., Bio, Follower count, Insights).
4. Add tags: Instagram | Outreach, Instagram | Enrichment, Instagram | Username found.
5. Test a sample flow: send a DM → fill the JotForm → verify data enrichment in KlickTipp.

💡 Pro Tip: Personalize the DM message template and test both personal and business accounts to ensure optimal engagement and AI insight quality.

Requirements
- Meta (Instagram) Business Account
- Facebook Graph API with instagram_basic and pages_show_list permissions
- KlickTipp account with API access
- OpenAI connection (gpt-4.1-mini model)
- (Optional) Active Instagram Page connected to your Facebook App for DM messaging

How to customize
- Adjust DM content and message timing for different campaigns or audiences.
- Edit tags and field mappings in KlickTipp to match your segmentation logic.
- Modify the AI prompt to emphasize tone, purchase intent, or niche interests.
- Add conditional logic (e.g., followers > 1,000 → influencer tag); a sketch of such a rule follows below.
- Extend the flow to LinkedIn, website tracking, or CRM syncing for multi-channel enrichment.
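For the conditional-logic customization mentioned above, a minimal sketch of a Code node that derives an extra tag from the enriched follower count could look like this; the field name and the added tag are illustrative assumptions, not the template's actual identifiers:

```javascript
// Hypothetical n8n Code node: derive an extra KlickTipp tag from the enriched profile data.
const profile = $json; // enriched Instagram data from the Graph API step (shape assumed)

const followerCount = Number(profile.followers_count ?? 0);
const tags = ['Instagram | Outreach', 'Instagram | Enrichment'];

// Example rule from the customization notes: treat accounts with >1,000 followers as influencers
if (followerCount > 1000) {
  tags.push('Instagram | Influencer'); // illustrative tag name
}

return [{ json: { ...profile, followerCount, tags } }];
```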