by Incrementors
**Google Play Review Intelligence with Bright Data & Telegram Alerts**

**Overview**
This n8n workflow automates the process of scraping Google Play Store reviews, analyzing app performance, and sending alerts for low-rated applications. It integrates with Bright Data for web scraping, Google Sheets for data storage, and Telegram for notifications.

**Workflow Components**

1. Trigger Input Form
- **Type:** Form Trigger
- **Purpose:** Initiates the workflow with user input
- **Input Fields:** URL (Google Play Store app URL), number of reviews to fetch
- **Function:** Captures user requirements to start the scraping process

2. Start Scraping Request
- **Type:** HTTP Request (POST)
- **Purpose:** Sends the scraping request to the Bright Data API
- **Endpoint:** https://api.brightdata.com/datasets/v3/trigger
- **Parameters:** Dataset ID: gd_m6zagkt024uwvvwuyu; include errors: true; limit multiple results: 5
- **Custom Output Fields:** url, review_id, reviewer_name, review_date, review_rating, review, app_url, app_title, app_developer, app_images, app_rating, app_number_of_reviews, app_what_new, app_content_rating, app_country, num_of_reviews

3. Check Scrape Status
- **Type:** HTTP Request (GET)
- **Purpose:** Monitors the progress of the scraping job
- **Endpoint:** https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
- **Function:** Checks whether the dataset scraping is complete

4. Wait for Response (45 sec)
- **Type:** Wait Node
- **Purpose:** Implements the polling mechanism
- **Duration:** 45 seconds
- **Function:** Pauses the workflow before checking status again

5. Verify Completion
- **Type:** IF Condition
- **Purpose:** Evaluates scraping completion status
- **Condition:** status === "ready"
- **Logic:** True: proceeds to fetch data; False: loops back to the status check

6. Fetch Scraped Data
- **Type:** HTTP Request (GET)
- **Purpose:** Retrieves the final scraped data
- **Endpoint:** https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
- **Format:** JSON
- **Function:** Downloads the completed review and app data

7. Save to Google Sheet
- **Type:** Google Sheets Node
- **Purpose:** Stores scraped data for analysis
- **Operation:** Append rows
- **Target:** Specified Google Sheet document
- **Data Mapping:** URL, Review ID, Reviewer Name, Review Date, Review Rating, Review Text, App Rating, App Number of Reviews, App What's New, App Country

8. Check Low Ratings
- **Type:** IF Condition
- **Purpose:** Identifies poor-performing apps
- **Condition:** review_rating < 4
- **Logic:** True: triggers an alert notification; False: no action taken

9. Send Alert to Telegram
- **Type:** Telegram Node
- **Purpose:** Sends performance alerts
- **Message Format:**
  Low App Performance Alert
  App: {app_title}
  Developer: {app_developer}
  Rating: {app_rating}
  Reviews: {app_number_of_reviews}
  View on Play Store

**Workflow Flow**
Input Form → Start Scraping → Check Status → Wait 45s → Verify Completion → (loops back to Check Status until ready) → Fetch Data → Save to Sheet & Check Ratings → Send Telegram Alert. (A reference sketch of the polling loop appears at the end of this section.)

**Configuration Requirements**

API Keys & Credentials:
- **Bright Data API Key:** Required for web scraping
- **Google Sheets OAuth2:** For data storage access
- **Telegram Bot Token:** For alert notifications

Setup Parameters:
- **Google Sheet ID:** Target spreadsheet identifier
- **Telegram Chat ID:** Destination for alerts
- **N8N Instance ID:** Workflow instance identifier

**Key Features**

Data Collection:
- Comprehensive app metadata extraction
- Review content and rating analysis
- Developer and country information
- App store performance metrics

Quality Monitoring:
- Automated low-rating detection
- Real-time performance alerts
- Continuous data archiving

Integration Capabilities:
- Bright Data web scraping service
- Google Sheets data persistence
- Telegram instant notifications
- Polling-based status monitoring

**Use Cases**

App Performance Monitoring:
- Track rating trends over time
- Identify user sentiment patterns
- Monitor competitor performance

Quality Assurance:
- Early warning for rating drops
- Customer feedback analysis
- Market reputation management

Business Intelligence:
- Review sentiment analysis
- Performance benchmarking
- Strategic decision support

**Technical Notes**
- **Polling Interval:** 45-second status checks
- **Rating Threshold:** Alerts triggered for ratings < 4
- **Data Format:** JSON with structured field mapping
- **Error Handling:** Includes error tracking in dataset requests
- **Result Limiting:** Maximum of 5 multiple results per request

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
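As a reference for how the polling pattern fits together, here is a minimal TypeScript sketch of the trigger-and-poll loop (components 2-5). The endpoints, dataset ID, status value, and 45-second interval come from the component descriptions above; the Bearer auth header and the request-body shape are assumptions and may differ from the actual node configuration.

```typescript
// Minimal sketch of the Bright Data trigger-and-poll pattern (not the actual n8n nodes).
const API_KEY = process.env.BRIGHT_DATA_API_KEY; // assumption: key supplied via env var
const DATASET_ID = "gd_m6zagkt024uwvvwuyu";      // dataset ID from the trigger node

async function triggerScrape(appUrl: string, numReviews: number): Promise<string> {
  const res = await fetch(
    `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${DATASET_ID}&include_errors=true`,
    {
      method: "POST",
      headers: { Authorization: `Bearer ${API_KEY}`, "Content-Type": "application/json" },
      body: JSON.stringify([{ url: appUrl, num_of_reviews: numReviews }]), // assumed input shape
    },
  );
  const { snapshot_id } = await res.json();
  return snapshot_id;
}

async function waitUntilReady(snapshotId: string): Promise<void> {
  // Mirrors the Wait 45 sec -> Check Scrape Status -> Verify Completion loop.
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, 45_000));
    const res = await fetch(`https://api.brightdata.com/datasets/v3/progress/${snapshotId}`, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    const { status } = await res.json();
    if (status === "ready") return; // true branch: proceed to Fetch Scraped Data
  }
}
```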
by Naveen Choudhary
This workflow automatically enriches company domain lists with comprehensive business information scraped from ZoomInfo, organizing the data in Google Sheets for sales teams and researchers.

**Who's it for**
- **Sales teams** building prospect databases with accurate company information
- **Marketing professionals** researching target companies for outreach campaigns
- **Business development teams** qualifying leads with revenue and employee data
- **Researchers** collecting structured company data for market analysis
- **Lead generation specialists** enriching domain lists with contact details

**How it works**
The workflow processes unprocessed domains from a Google Sheet, searches for their ZoomInfo profiles using the Serper API, scrapes the company pages through the Oxylabs proxy service, and extracts structured business data. Each domain is marked as processed to prevent duplicates, and the workflow includes proper rate limiting to respect API limits.

**What it does**
1. Loads unprocessed domains from your Google Sheets database
2. Searches ZoomInfo using targeted queries via the Serper API for each domain (see the sketch after this section)
3. Validates search results and extracts relevant ZoomInfo profile URLs
4. Scrapes company pages using Oxylabs to bypass anti-scraping protection
5. Extracts structured data including company details, address, revenue, and employee count
6. Updates Google Sheets with enriched company information
7. Tracks processing status to prevent reprocessing the same domains

**Requirements**
- **Serper API account** with search credits (Get API key)
- **Oxylabs subscription** for web scraping proxy service (Sign up here)
- **Google Sheets API access** with OAuth2 authentication
- **Google Sheets template** - make a copy of this template sheet with pre-configured columns

**How to set up**
1. Make a copy of the Google Sheets template - click here to copy the template to your Google Drive
2. Configure API credentials in the respective HTTP Request nodes: add the Serper API key in the search node and set up the Oxylabs username/password in the scraping node
3. Set up Google Sheets authentication using OAuth2
4. Update the Google Sheets document ID in all Google Sheets nodes to point to your copied template
5. Add your domain list to the sheet with the 'processed' column empty or false
6. Run the workflow using the manual trigger

**How to customize the workflow**
- **Search query modification**: Update the search query in the Serper node for a different geographic focus (currently set for the Czech Republic)
- **Data extraction fields**: Modify the Google Sheets column mapping to include/exclude specific company data points
- **Rate limiting**: Adjust wait times between requests to match your API rate limits
- **Batch processing**: Configure the split batch size for processing domains in smaller groups
- **Error handling**: Customize the continue-on-error settings based on your data quality requirements
- **Scheduling**: Replace the Manual Trigger with a Schedule Trigger for automated daily/weekly runs

**Output data includes**
- Complete company name and official address
- Phone numbers and contact information
- Revenue figures and employee headcount
- Industry classifications and business categories
- LinkedIn company profile URLs
- Geographic location details (city, state, country, postal code)
- Processing status tracking for workflow management

Note: This workflow includes comprehensive error handling to ensure domains are always marked as processed, preventing infinite loops while maintaining data integrity. Rate limiting is built in to respect API quotas and avoid service interruptions.
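To make the search step concrete, here is a hedged TypeScript sketch of the Serper call that looks up a domain's ZoomInfo profile. The endpoint and X-API-KEY header match Serper's public API; the exact query string and the `organic` response field are assumptions based on the template's description.

```typescript
// Sketch: find a ZoomInfo profile URL for one domain via Serper (assumptions noted above).
async function findZoomInfoUrl(domain: string): Promise<string | null> {
  const res = await fetch("https://google.serper.dev/search", {
    method: "POST",
    headers: {
      "X-API-KEY": process.env.SERPER_API_KEY!, // assumption: key supplied via env var
      "Content-Type": "application/json",
    },
    // gl: "cz" mirrors the template's current Czech Republic focus.
    body: JSON.stringify({ q: `site:zoominfo.com ${domain}`, gl: "cz" }),
  });
  const data = await res.json();
  const hit = (data.organic ?? []).find((r: { link?: string }) => r.link?.includes("zoominfo.com"));
  // Returning null still lets the workflow mark the domain processed, avoiding infinite loops.
  return hit?.link ?? null;
}
```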
by David Roberts
This workflow allows you to ask questions about the data in a Google Sheet over a chat interface. It uses n8n's built-in chat, but could be modified to work with Slack, Teams, WhatsApp, etc. Behind the scenes, the workflow uses GPT-4, so you'll need an OpenAI API key that supports it.

**How it works**
The workflow uses an AI agent with custom tools that call a sub-workflow. That sub-workflow reads the Google Sheet and returns information from it. Because models have a context window (and therefore a maximum number of characters they can accept), we can't pass the whole Google Sheet to GPT - at least not for big sheets. So we provide three ways of querying less data that can be used in combination to answer questions (sketched after this section). Those three functions are:
- List all the columns in the sheet
- Get all values of a single column
- Get all values of a single row

Note that to use this template, you need to be on n8n version 1.19.4 or later.
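A sketch of what those three tools might look like if the sub-workflow's logic were written as plain TypeScript over the rows the Google Sheets node returns (an array of objects keyed by column name); the function names are illustrative, not the template's actual node names.

```typescript
// Illustrative implementations of the three query tools over sheet rows.
type Row = Record<string, string>;

function listColumns(rows: Row[]): string[] {
  // Tool 1: column names only - a tiny payload the model can always afford.
  return rows.length > 0 ? Object.keys(rows[0]) : [];
}

function getColumn(rows: Row[], column: string): string[] {
  // Tool 2: every value of one column.
  return rows.map((row) => row[column] ?? "");
}

function getRow(rows: Row[], index: number): Row | undefined {
  // Tool 3: one full row (0-based index here; the agent chains these calls).
  return rows[index];
}
```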
by Davide
This workflow allows users to generate AI videos using Google Veo3, save them to Google Drive, generate optimized YouTube titles with GPT-4o, and automatically upload them to YouTube with Upload-Post. The entire process is triggered from a Google Sheet that acts as the central interface for input and output. It automates video creation, uploading, and tracking, ensuring seamless integration between Google Sheets, Google Drive, Google Veo3, and YouTube.

**Benefits of this Workflow**
- **No-Code Interface**: Trigger and control the video production pipeline from a simple Google Sheet.
- **Full Automation**: Once set up, the entire video generation and publishing process runs hands-free.
- **AI-Powered Creativity**: Generates engaging YouTube titles using GPT-4o and leverages advanced generative video AI from Google Veo3.
- **Cloud Storage & Backup**: Stores all generated videos on Google Drive for safekeeping.
- **YouTube Ready**: Automatically uploads to YouTube with correct metadata, saving time and boosting visibility.
- **Scalable**: Designed to process multiple video prompts by looping through new entries in Google Sheets.
- **API-First**: Utilizes secure API-based communication for all services.

**How It Works**
1. Trigger: The workflow can be started manually ("When clicking 'Test workflow'") or scheduled ("Schedule Trigger") to run at regular intervals (e.g., every 5 minutes).
2. Fetch Data: The "Get new video" node retrieves unfilled video requests from a Google Sheet (rows where the "VIDEO" column is empty).
3. Video Creation: The "Set data" node formats the prompt and duration from the Google Sheet, then the "Create Video" node sends a request to the Fal.run API (Google Veo3) to generate a video based on the prompt.
4. Status Check: The "Wait 60 sec." node pauses execution for 60 seconds, then the "Get status" node checks the video generation status. If the status is "COMPLETED," the workflow proceeds; otherwise, it waits again. (See the sketch after this section.)
5. Video Processing: The "Get Url Video" node fetches the video URL, the "Generate title" node uses OpenAI (GPT-4.1) to create an SEO-optimized YouTube title, and the "Get File Video" node downloads the video file.
6. Upload & Update: The "Upload Video" node saves the video to Google Drive, the "HTTP Request" node uploads the video to YouTube via the Upload-Post API, and the "Update Youtube URL" and "Update result" nodes update the Google Sheet with the video URL and YouTube link.

**Set Up Steps**
1. Google Sheet Setup: Create a Google Sheet with the columns PROMPT, DURATION, VIDEO, and YOUTUBE_URL, and share the Sheet link in the "Get new video" node.
2. API Keys: Obtain a Fal.run API key (for Veo3) and set it in the "Create Video" node (header: Authorization: Key YOURAPIKEY). Get an Upload-Post API key (for YouTube uploads) and configure the "HTTP Request" node (header: Authorization: Apikey YOUR_API_KEY).
3. YouTube Upload Configuration: Replace YOUR_USERNAME in the "HTTP Request" node with your Upload-Post profile name.
4. Schedule Trigger: Configure the "Schedule Trigger" node to run periodically (e.g., every 5 minutes).

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
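For orientation, here is a hedged TypeScript sketch of the create/wait/check cycle described in steps 3-4. The "Authorization: Key ..." header format and the 60-second wait come from the description above; the Fal queue endpoint paths and body fields are assumptions and should be checked against Fal's current Veo3 docs.

```typescript
// Sketch of the Create Video -> Wait 60 sec. -> Get status loop (endpoints assumed).
const FAL_HEADERS = {
  Authorization: `Key ${process.env.FAL_API_KEY}`, // header format from the setup steps
  "Content-Type": "application/json",
};

async function createVideo(prompt: string, duration: number): Promise<string> {
  const res = await fetch("https://queue.fal.run/fal-ai/veo3", { // hypothetical model path
    method: "POST",
    headers: FAL_HEADERS,
    body: JSON.stringify({ prompt, duration }), // assumed body shape
  });
  const { request_id } = await res.json();
  return request_id;
}

async function pollUntilCompleted(requestId: string): Promise<void> {
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, 60_000)); // mirrors "Wait 60 sec."
    const res = await fetch(
      `https://queue.fal.run/fal-ai/veo3/requests/${requestId}/status`, // hypothetical path
      { headers: FAL_HEADERS },
    );
    const { status } = await res.json();
    if (status === "COMPLETED") return; // matches the workflow's status check
  }
}
```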
by Wikus Bergh
**Who is this for?**
This template is ideal for n8n administrators, automation engineers, and DevOps teams who want to maintain bidirectional synchronization between their n8n workflows and GitHub repositories. It helps teams keep their workflow backups up to date and ensures consistency between their n8n instance and version control system.

**What problem is this workflow solving?**
Managing workflow versions across n8n and GitHub can become complex when changes happen in both places. This workflow solves that by automatically synchronizing workflows bidirectionally, ensuring that the most recent version is always available in both systems without manual intervention or version conflicts.

**What this workflow does:**
- Runs on a weekly schedule (every Monday) to check for synchronization needs.
- Fetches all workflows from your n8n instance and compares them with GitHub repository files.
- Identifies workflows that exist only in n8n and uploads them to GitHub as JSON backups.
- Identifies workflows that exist only in GitHub and creates them in your n8n instance.
- For workflows that exist in both places, compares timestamps and syncs the most recent version (see the sketch after this section): if the n8n version is newer, it updates GitHub with the latest workflow; if the GitHub version is newer, it updates n8n with the latest workflow.
- Automatically handles file naming, encoding/decoding, and commit messages with timestamps.

**Setup:**
1. Connect GitHub: Configure GitHub API credentials in the GitHub nodes. Note: use a GitHub Personal Access Token (classic) with repo permissions to read and write workflow files.
2. Connect n8n API: Provide your n8n API credentials in the n8n nodes. Check this doc.
3. Configure GitHub details in the Set GitHub Details node:
   - github_account_name: Your GitHub username or organization
   - github_repo_name: The repository name where workflows should be stored
   - repo_workflows_path: The folder path in your repo (e.g., workflows or n8n-workflows)
4. Adjust the schedule: Modify the Schedule Trigger if you want a different sync frequency (currently set to weekly on Mondays).
5. Test the workflow: Run it manually first to ensure all connections and permissions are working correctly.

**How to customize this workflow to your needs:**
- **Change sync frequency**: Modify the Schedule Trigger to run daily, hourly, or on demand.
- **Add filtering**: Extend the Filter node to exclude certain workflows (e.g., test workflows, templates).
- **Add notifications**: Insert Slack, email, or webhook notifications to report sync results.
- **Implement conflict resolution**: Add custom logic for handling workflows with the same timestamp.
- **Add workflow validation**: Include checks to validate workflow JSON before syncing.
- **Branch management**: Modify to sync to different branches or create pull requests instead of direct commits.
- **Backup retention**: Add logic to maintain multiple versions or archive old workflows.

**Key Features:**
- **Bidirectional sync**: Handles changes from both n8n and GitHub
- **Timestamp-based conflict resolution**: Always keeps the most recent version
- **Automatic file naming**: Converts workflow names to valid filenames
- **Base64 encoding/decoding**: Properly handles JSON workflow data
- **Comprehensive comparison**: Uses dataset comparison to identify differences
- **Automated commits**: Includes timestamps in commit messages for traceability

This automated synchronization workflow provides a robust backup and version control solution for n8n workflows, ensuring your automation assets are always safely stored and consistently available across environments.
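A minimal sketch of the timestamp comparison at the heart of the sync, assuming the n8n API's updatedAt field on one side and the last commit date of the workflow file on the other (the field names and the Base64 helper are illustrative):

```typescript
// Decide the sync direction for a workflow that exists in both systems.
type SyncAction = "update_github" | "update_n8n" | "in_sync";

function resolveConflict(n8nUpdatedAt: string, githubCommittedAt: string): SyncAction {
  const n8nTime = Date.parse(n8nUpdatedAt);
  const githubTime = Date.parse(githubCommittedAt);
  if (n8nTime > githubTime) return "update_github"; // n8n version is newer
  if (githubTime > n8nTime) return "update_n8n";    // GitHub version is newer
  return "in_sync"; // equal timestamps: nothing to do (or plug in custom handling)
}

// GitHub's contents API stores file bodies as Base64, hence the encoding step:
function encodeWorkflow(workflowJson: object): string {
  return Buffer.from(JSON.stringify(workflowJson, null, 2)).toString("base64");
}
```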
by Jimleuk
This n8n workflow is a proof-of-concept template exploring how we might work with multimodal LLMs and their multi-image analysis capabilities. In this demo, we compare 2 screenshots of a webpage taken at different timestamps and pass both to our multimodal LLM for a visual comparison of differences. Handling multiple binary inputs (i.e., images) in an AI request is supported by n8n's basic LLM node.

**How it works**
This template is intended to run as 2 parts: first to generate the base screenshots, and next to run the visual regression test which captures fresh screenshots.
- Starting with a list of webpages captured in a Google Sheet, base screenshots are captured for each using an external web scraping service called Apify.com (I prefer Apify but feel free to use whichever web scraping service is available to you).
- These base screenshots are uploaded to Google Drive and will be referenced later when we run our testing.
- In phase 2 of the workflow, a scheduled trigger fires sometime in the future and reuses our web scraping service to generate fresh screenshots of our desired webpages.
- Next, we re-download our base screenshots in parallel and, with both old and new captures, pass them to our LLM node. In the LLM node's options, we define 2 "user message" inputs with the type of binary (data) for our images (see the sketch after this section).
- Finally, we prompt our LLM with our testing criteria and capture the regressions detected. Note: results will vary depending on which LLM you use.
- A final report can be generated using the LLM's output and is uploaded to Linear.

**Requirements**
- Apify.com API key for the web screenshotting service
- Google Drive and Sheets access to store the list of webpages and captures

**Customising this workflow**
- Have your own preferred web screenshotting service? Feel free to swap out Apify with your service of choice.
- If the web screenshot is too large, it may prove difficult for the LLM to spot differences with precision. Try splitting up captures into smaller images instead.
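Outside n8n, the same two-image request could look roughly like the following TypeScript sketch against OpenAI's chat API (the model choice and prompt are illustrative; in the template this is handled by the basic LLM node's two binary user-message inputs):

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // assumes OPENAI_API_KEY in the environment

// Pass the base and fresh screenshots (base64-encoded PNGs) in a single request.
async function compareScreenshots(baseB64: string, freshB64: string): Promise<string | null> {
  const res = await openai.chat.completions.create({
    model: "gpt-4o", // illustrative; the template notes results vary by LLM
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Compare these two screenshots of the same webpage. List any visual regressions: layout shifts, missing elements, or changed copy.",
          },
          { type: "image_url", image_url: { url: `data:image/png;base64,${baseB64}` } },
          { type: "image_url", image_url: { url: `data:image/png;base64,${freshB64}` } },
        ],
      },
    ],
  });
  return res.choices[0].message.content;
}
```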
by lin@davoy.tech
The YogiAI workflow automates sending daily yoga pose reminders and related information via Line Push Messages. This automation leverages data from a Google Sheets database containing yoga pose details such as names, image URLs, and links to ensure users receive personalized and engaging content every day.

**Purpose**
- Provide users with daily yoga pose suggestions tailored to their practice.
- Deliver visually appealing and informative content through Line's Flex Messages, including images and clickable links.
- Log user interactions and preferences back into Google Sheets to refine future recommendations.

**Key Features**
- Automated Daily Reminders: Sends a curated list of yoga poses at a scheduled time (21:30 Bangkok time).
- Dynamic Content Generation: Uses AI to rewrite and format messages in a user-friendly manner, complete with emojis and clear instructions.
- Integration with Google Sheets: Pulls data from a predefined Google Sheet and logs interactions for continuous improvement.
- Customizable Messaging: Ensures JSON outputs are properly formatted for Line's Flex Message API, allowing for interactive and visually rich content.

**Data Source**
The workflow relies on a Google Sheet structured as follows:
- PoseName: The name of the yoga pose.
- uri: The image URL representing the pose.
- url: A clickable link directing users to more information about the pose.

Sample data layout:

| PoseName | uri | url |
|----------|-----|-----|
| Supine Angle | https://example.com/SupineAngle-tn146.png | https://example.com/pose/SupineAngle |
| Warrior II | https://example.com/WarriorII-tn146.png | https://example.com/pose/WarriorII |

*Note: Ensure that you update the Google Sheet with your own data. Refer to this sample sheet for reference.*

**Scheduled Trigger**
The workflow is triggered daily at 21:30 (9:30 PM) Bangkok time (Asia/Bangkok). This ensures timely delivery of reminders to users, keeping them engaged with their yoga practice.

**Workflow Process**
1. Data Retrieval - Node: Get PoseName - fetches yoga pose details from the specified range in the Google Sheet.
2. Content Generation - Node: WritePosesToday - uses Azure OpenAI to craft user-friendly text, complete with emojis and clear instructions. Node: RewritePosesToday - formats the AI-generated text specifically for Line messaging, ensuring compatibility and visual appeal.
3. JSON Formatting - Node: WriteJSONflex - generates the JSON structures required for Line's Flex Messages, enabling carousel displays of yoga pose images and links (see the sketch after this section). Node: Fix JSON - ensures all JSON outputs are correctly formatted before being sent via Line.
4. Message Delivery - Node: Line Push with Flex Bubble - sends the final message, including both text and Flex Message carousels, directly to users via Line Push Messages.
5. Logging Interactions - Nodes: YogaLog & YogaLog2 - log each interaction back into Google Sheets to track which poses were sent and how often they appear, refining future recommendations.

**Setup Prerequisites**
- Google Sheets Account: Set up a Google Sheet with the required structure and populate it with your yoga pose data.
- Line Developer Account: Create a Line channel to obtain the necessary credentials for sending push messages.
- Azure OpenAI Account: Configure access to Azure OpenAI services for generating and formatting content.

**Intended Audience**
This workflow is ideal for:
- Yoga Instructors: Seeking to engage students with daily pose suggestions.
- Fitness Enthusiasts: Looking to maintain consistency in their yoga practice.
- Content Creators: Interested in automating personalized and visually appealing content distribution.
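As a rough illustration of what the WriteJSONflex step produces, here is a TypeScript sketch of a Flex carousel built from the sheet's PoseName / uri / url columns. The payload shape follows LINE's published Flex Message format; the exact fields the template emits may differ.

```typescript
// Build a Flex carousel: one bubble per pose, with a hero image and a tap-through link.
interface Pose {
  PoseName: string; // pose name column
  uri: string;      // image URL column
  url: string;      // info link column
}

function buildFlexCarousel(poses: Pose[]) {
  return {
    type: "flex",
    altText: "Today's yoga poses",
    contents: {
      type: "carousel",
      contents: poses.map((pose) => ({
        type: "bubble",
        hero: { type: "image", url: pose.uri, size: "full", aspectMode: "cover" },
        body: {
          type: "box",
          layout: "vertical",
          contents: [{ type: "text", text: pose.PoseName, weight: "bold" }],
        },
        action: { type: "uri", label: "View pose", uri: pose.url }, // opens the pose page
      })),
    },
  };
}
```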
by Dr. Firas
**Who Is This For**
This workflow is ideal for content creators, solo founders, marketers, and AI enthusiasts who want to automate the full process of blog content creation. It is especially useful for professionals in tech, AI, and automation who publish frequently and need SEO-ready content fast.

**What Problem Does This Workflow Solve**
Creating SEO-optimized blog content is time-consuming and requires consistency. Manually researching trending topics slows down the content pipeline, and formatting, publishing, and promoting across multiple platforms takes effort. This workflow automates the entire process from research to publication.

**What This Workflow Does**
- Research: Uses Perplexity AI to gather up-to-date content ideas via form input.
- Content Generation: GPT-4 creates a short, SEO-optimized article (max 20 lines) with H1/H2 structure and a meta-description (see the prompt sketch after this section).
- Publishing: Automatically posts the content to WordPress.
- Email Notification: Sends the article title and URL via Gmail.
- Slack Notification: Notifies a specified Slack channel when the article is live.
- Database Logging: Saves the article details to a Notion database.

**Setup Guide**

Prerequisites:
- WordPress account with API access
- OpenAI API key
- Perplexity API key
- Slack bot token
- Notion integration (database ID)
- Gmail API credentials (optional)
- Community node required: this workflow uses n8n-nodes-mcp, which only works on self-hosted instances of n8n.

> To install: Go to Settings > Community Nodes > Install n8n-nodes-mcp

Steps:
1. Import the workflow into your n8n instance
2. Install the required community node (n8n-nodes-mcp)
3. Set up API credentials for OpenAI, Perplexity, WordPress, Slack, Gmail, and Notion
4. Customize the form trigger with your preferred prompt
5. Run a test using a sample topic

**How to Customize This Workflow**
- Modify the research prompt to match your niche or industry
- Adjust GPT-4 settings for tone, structure, or content length
- Customize Notion fields (e.g., add tags, categories, or labels)
- Add logic for generating or assigning featured images automatically
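To illustrate the content-generation step, here is a hedged sketch of what the GPT-4 prompt could look like given the constraints above; the template's actual prompt may differ.

```typescript
// Illustrative prompt builder for the content-generation step.
const buildArticlePrompt = (topic: string, research: string): string => `
Write an SEO-optimized blog article about "${topic}".
Constraints:
- Maximum 20 lines of body text.
- One H1 title and at least two H2 subheadings.
- Finish with a meta-description of 150-160 characters.
Base the article strictly on this research:
${research}
`;
```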
by Javier Hita
Follow me on LinkedIn for more!

**Category**: Lead Generation, Data Collection, Business Intelligence
**Tags**: lead-generation, google-maps, rapidapi, business-data, contact-extraction, google-sheets, duplicate-prevention, automation
**Difficulty Level**: Intermediate
**Estimated Setup Time**: 15-20 minutes

**Template Description**

Overview: This powerful n8n workflow automates the extraction of comprehensive business information from Google Maps using keyword-based searches via RapidAPI's Local Business Data service. Perfect for lead generation, market research, and competitive analysis, this template intelligently gathers business data including contact details, social media profiles, and location information while preventing duplicates and optimizing API usage.

**Key Features**
- **Keyword-Based Google Maps Scraping**: Search for any business type in any location using natural language queries
- **Contact Information Extraction**: Automatically extracts emails, phone numbers, and social media profiles (LinkedIn, Instagram, Facebook, etc.)
- **Smart Duplicate Prevention**: Two-level duplicate detection saves 50-80% on API costs by skipping processed searches and preventing duplicate business entries
- **Google Sheets Integration**: Seamless data storage with automatic organization and structure
- **Multi-Location Support**: Process multiple cities, regions, or countries in a single workflow execution
- **Rate Limiting & Error Handling**: Built-in delays and error handling ensure reliable, uninterrupted execution
- **Cost Optimization**: Intelligent batching and duplicate prevention minimize API usage and costs
- **Comprehensive Data Collection**: Gather business names, addresses, ratings, reviews, websites, verification status, and more

**Prerequisites**

Required services & accounts:
- RapidAPI account with a subscription to the "Local Business Data" API
- Google account for Google Sheets integration
- n8n instance (cloud or self-hosted)

Required credentials:
- **RapidAPI HTTP Header Authentication** for the Local Business Data API
- **Google Sheets OAuth2** for data storage and retrieval

**Setup Instructions**

Step 1: RapidAPI Configuration
1. Create a RapidAPI account: sign up at RapidAPI.com, navigate to the "Local Business Data" API, and subscribe to a plan (the Basic plan supports 1000 requests/month).
2. Get API credentials: copy your X-RapidAPI-Key from the API dashboard and note the host: local-business-data.p.rapidapi.com.
3. Configure the n8n credential: in n8n go to Settings → Credentials → Create New, choose type HTTP Header Auth, name it "RapidAPI Local Business Data", and add the headers:
   - X-RapidAPI-Key: YOUR_API_KEY
   - X-RapidAPI-Host: local-business-data.p.rapidapi.com

Step 2: Google Sheets Setup
1. Enable the Google Sheets API: go to the Google Cloud Console, enable the Google Sheets API for your project, and create OAuth2 credentials.
2. Configure the n8n credential: in n8n go to Settings → Credentials → Create New, choose type Google Sheets OAuth2 API, and follow the OAuth2 setup process.
3. Create the Google Sheet structure: create a new Google Sheet with these tabs:

keyword_searches sheet:

| select | query | lat | lon | country_iso_code |
|--------|-------|-----|-----|------------------|
| X | Restaurants Madrid | 40.4168 | -3.7038 | ES |
| X | Hair Salons Brooklyn | 40.6782 | -73.9442 | US |
| X | Coffee Shops Paris | 48.8566 | 2.3522 | FR |

stores_data sheet: the workflow will automatically create columns for business data including business_id, name, phone_number, email, website, full_address, rating, review_count, linkedin, instagram, query, lat, lon, and 25+ more fields.

Step 3: Workflow Configuration
1. Import the workflow: copy the provided JSON and in n8n choose Import from JSON.
2. Update placeholder values: replace YOUR_GOOGLE_SHEET_ID with your actual Google Sheet ID and update credential references to match your setup.
3. Configure search parameters (optional): adjust limit (1-100 results per query, default 100), modify zoom (10-18 search radius, default 13), change language (EN, ES, FR, etc., default EN).

**How It Works**

Workflow process:
1. Load search criteria: reads queries marked with "X" from the keyword_searches sheet
2. Load existing data: retrieves previously processed data for duplicate detection
3. Filter new searches: a smart merge identifies only new query+location combinations
4. Process each location: sequential processing prevents API overload
5. Configure parameters: prepares search parameters from sheet data
6. API request: calls RapidAPI to extract business information
7. Parse data: structures and cleans all business information
8. Save results: stores new leads in the stores_data sheet
9. Rate limiting: 10-second delay between requests
10. Loop: continues until all new searches are processed

Duplicate prevention logic (see the sketch after this section):
- Search level: compares new queries against existing data using the query+latitude combination, skipping already processed searches.
- Business level: each business receives a unique business_id to prevent duplicate entries even across different searches.

**Data Extracted**

Business information:
- Business name, full address, phone number
- Website URL, Google My Business rating and review count
- Business type, price level, verification status
- Geographic coordinates (latitude/longitude)
- Detailed location breakdown (street, city, state, country, zip)

Contact details:
- Email addresses (when publicly available)
- Social media profiles: LinkedIn, Instagram, Facebook, Twitter, YouTube, TikTok, Pinterest
- Additional phone numbers
- Direct Google Maps and reviews links

Search metadata:
- Original search query and parameters
- Extraction timestamp and geographic data
- API response details for tracking

**Use Cases**

Lead generation:
- Generate targeted prospect lists for B2B sales
- Build location-specific customer databases
- Create industry-specific contact lists
- Develop territory-based sales strategies

Market research:
- Analyze competitor density in target markets
- Study business distribution
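In TypeScript terms, the two-level duplicate prevention could be sketched as follows, assuming the two sheets are loaded as arrays of row objects (the field names match the sheet columns above; the function names are illustrative):

```typescript
// Two-level duplicate prevention over the keyword_searches and stores_data sheets.
interface SearchRow { query: string; lat: string }
interface StoreRow { business_id: string; query: string; lat: string }

// Search level: skip query+latitude combinations that were already processed.
function filterNewSearches(requested: SearchRow[], existing: StoreRow[]): SearchRow[] {
  const seen = new Set(existing.map((row) => `${row.query}|${row.lat}`));
  return requested.filter((row) => !seen.has(`${row.query}|${row.lat}`));
}

// Business level: business_id stays unique even across different searches.
function filterNewBusinesses(results: StoreRow[], existing: StoreRow[]): StoreRow[] {
  const known = new Set(existing.map((row) => row.business_id));
  return results.filter((row) => !known.has(row.business_id));
}
```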
by Polina Medvedieva
This n8n workflow template lets you easily generate comprehensive FAQ (Frequently Asked Questions) content for multiple services (or any items or pages you need to add FAQs to). Simply provide the Google Sheets document containing the items to scrape, and the workflow automatically creates detailed, AI-enhanced FAQ documents.

**How it works**
- The workflow reads data from a Google Sheets document containing information about different services and categories (again, in your case, whatever objects you need).
- For each service and category, it generates a set of standard questions and answers covering setup, permissions, integrations, use cases, and pricing benefits.
- An AI model (OpenAI's GPT) is used to enhance or complete some of the answers, making the content more comprehensive and natural-sounding.
- The workflow formats the Q&A pairs, combining AI-generated content with predefined answers where applicable.
- It creates a text file (JSON) for each service or category, containing the formatted Q&A pairs (see the sketch after this section).
- The generated files are saved to specific folders in Google Drive, organized by the type of integration (native, credential-only, non-native) or category.
- After processing each service or category, it updates the status in the original Google Sheets document to mark it as completed.

**Ideal for:**
- Marketing teams: Rapidly create comprehensive FAQ documents for multiple products or services.
- Customer support: Generate consistent and detailed answers for common customer queries.
- Product managers: Easily maintain up-to-date documentation as products evolve.
- Content creators: Streamline the process of creating informative content about various offerings.

**Accounts required**
- Google account (for Google Sheets and Google Drive)
- OpenAI API account (for AI-enhanced content generation)
- n8n.io account (for workflow execution)

**Set up instructions**
1. Set up the required credentials for Google Sheets, Google Drive, and OpenAI when you first open the workflow.
2. Prepare your Google Sheets document with the service/category information. Here's an example Google Sheet.
3. Fill the "Define Sheets" node with your sheets.
4. Adjust the folder IDs in the "Prepare Job" node to match your Google Drive structure.
5. Configure the OpenAI model settings in the "OpenAI Chat Model" node if needed.
6. Test the workflow with a small subset of data before running it on your entire dataset.
7. Adjust the questions asked in the "Create your Q&A templates" section.
8. After testing, activate your workflow for automated FAQ generation.

Big, big kudos to Jim Le for his ideas, input and support when building this workflow. Your approach to AI workflows is always super helpful!
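For a sense of the per-service output file, here is a hedged TypeScript sketch; the actual JSON layout the template writes to Google Drive may differ.

```typescript
// Illustrative shape of one generated FAQ file.
interface QAPair {
  question: string;
  answer: string;
  source: "predefined" | "ai"; // predefined answers vs. AI-enhanced ones
}

function buildFaqFile(service: string, pairs: QAPair[]): string {
  return JSON.stringify(
    { service, generatedAt: new Date().toISOString(), faqs: pairs },
    null,
    2,
  );
}

// Example usage with one predefined and one AI-completed answer:
const fileBody = buildFaqFile("Example service", [
  { question: "How do I set it up?", answer: "Connect your account and pick a sheet.", source: "predefined" },
  { question: "What are typical use cases?", answer: "Lead tracking, reporting, data sync.", source: "ai" },
]);
```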
by Ranjan Dailata
**Disclaimer**
This template is only available on n8n self-hosted as it makes use of the community node for MCP Client.

**Who this is for?**
The Extract, Transform LinkedIn Data with Bright Data MCP Server & Google Gemini workflow is an automated solution that scrapes LinkedIn content via the Bright Data MCP Server, then transforms the response using a Gemini LLM. The final output is sent via webhook notification and also persisted on disk. This workflow is tailored for:
- Data Analysts: Who require structured LinkedIn datasets for analytics and reporting.
- Marketing and Sales Teams: Looking to enrich lead databases, track company updates, and identify market trends.
- Recruiters and Talent Acquisition Specialists: Who want to automate candidate sourcing and company research.
- AI Developers: Integrating real-time professional data into intelligent applications.
- Business Intelligence Teams: Needing current and comprehensive LinkedIn data to drive strategic decisions.

**What problem is this workflow solving?**
Gathering structured and meaningful information from the web is traditionally slow, manual, and error-prone. This workflow solves:
- Reliable web scraping using the Bright Data MCP Server LinkedIn tools.
- LinkedIn person and company web scraping with AI Agents set up with the Bright Data MCP Server tools.
- Data extraction and transformation with the Google Gemini LLM.
- Persisting the LinkedIn person and company info to disk.
- Performing a webhook notification with the LinkedIn person and company info.

**What this workflow does**
This n8n workflow performs the following steps:
1. Trigger: Start manually.
2. Input URL(s): Specify the LinkedIn person and company URL.
3. Web Scraping (Bright Data): Use Bright Data's MCP Server LinkedIn tools for the person and company data extraction.
4. Data Transformation & Aggregation: Uses the Google LLM for handling the data transformation.
5. Store / Output: Saves results to disk and also performs a webhook notification.

**Pre-conditions**
- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol.
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp.
- You need to install the n8n-nodes-mcp community node.

**Setup**
1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions; call the zone mcp_unlocker on the Bright Data control panel.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN within the Environments textbox as API_TOKEN=<your-token> (see the sketch after this section).
7. Update the LinkedIn person and company URLs in the workflow.
8. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
9. Update the file name and path to persist on disk.

**How to customize this workflow to your needs**
- Different inputs: Instead of static URLs, accept URLs dynamically via webhook or form submissions.
- Data extraction: Modify the LinkedIn Data Extractor node with a suitable prompt to format the data as you wish.
- Outputs: Update the webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
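For setup step 6 above, the MCP Client (STDIO) credential values could be sketched as follows; only the API_TOKEN environment entry is documented above, while the command and arguments reflect the usual way of launching an MCP server with npx and are assumptions:

```typescript
// Illustrative MCP Client (STDIO) credential values for the Bright Data MCP Server.
const mcpClientStdioCredential = {
  command: "npx",                         // assumption: launch the server via npx
  args: ["-y", "@brightdata/mcp"],        // the Bright Data MCP server package
  environments: "API_TOKEN=<your-token>", // from your Bright Data control panel
};
```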
by Ranjan Dailata
**Disclaimer**
This template is only available on n8n self-hosted as it makes use of the community node for MCP Client.

**Who this is for?**
The Scrape Web Data with Bright Data and MCP Automated AI Agent workflow is built for professionals who need to automate large-scale, intelligent data extraction by utilizing the Bright Data MCP Server and Google Gemini. This solution is ideal for:
- Data Analysts: Who require structured, enriched datasets for analysis and reporting.
- Marketing Researchers: Seeking fresh market intelligence from dynamic web sources.
- Product Managers: Who want competitive product and feature insights from various websites.
- AI Developers: Aiming to feed web data into downstream machine learning models.
- Growth Hackers: Looking for high-quality data to fuel campaigns, research, or strategic targeting.

**What problem is this workflow solving?**
Manually scraping websites, cleaning raw HTML data, and generating useful insights from it can be slow, error-prone, and non-scalable. This workflow solves these problems by:
- Automating complex web data extraction through Bright Data's MCP Server.
- Reducing the human effort needed for cleaning, parsing, and analyzing unstructured web content.
- Allowing seamless integration into further automation processes.

**What this workflow does**
This n8n workflow performs the following steps:
1. Trigger: Start manually.
2. Input URL(s): Specify the URL to perform the web scraping.
3. Web Scraping (Bright Data): Use Bright Data's MCP Server tools to accomplish the web data scraping in markdown and HTML format.
4. Store / Output: Saves results to disk and also performs a webhook notification (see the sketch after this section).

**Setup**
1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN within the Environments textbox as API_TOKEN=<your-token>.
7. Update the input URL(s) in the workflow.
8. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
9. Update the file name and path to persist on disk.

**How to customize this workflow to your needs**
- Different inputs: Instead of static URLs, accept URLs dynamically via webhook or form submissions.
- Outputs: Update the webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
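As an illustration of the Store / Output step, here is a hedged TypeScript sketch that persists the scraped markdown to disk and fires a webhook notification; the result shape, file path, and endpoint are placeholders, not the template's actual values.

```typescript
import { writeFile } from "node:fs/promises";

// Persist the scrape result and notify a webhook (placeholder values throughout).
async function persistAndNotify(
  result: { markdown: string; html: string }, // assumed shape of the MCP scrape output
  sourceUrl: string,
): Promise<void> {
  await writeFile("scrape-result.md", result.markdown, "utf8"); // persist to disk
  await fetch("https://example.com/webhook/scrape-complete", {  // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sourceUrl, markdown: result.markdown }),
  });
}
```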