by Samir Saci
Tags: Scraping, Events, European Union, Networking

**Context**

Hey! I’m Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable and data-driven operations across all types of organizations. This workflow is part of our networking strategy (as a business) to track official EU events that may relate to topics we cover.

> Want to stay ahead of critical EU meetings and events without checking the website every day? This n8n workflow automatically scrapes the EU’s official event portal and logs the latest entries with clean metadata including date, location, category, and link.

📬 For collaborations, feel free to connect with me on LinkedIn

**Who is this template for?**

This workflow is useful for:
- **Policy & public affairs teams** following institutional activities
- **Sustainability teams** watching for relevant climate-related summits
- **NGOs and researchers** interested in event calendars
- **Data teams** building dashboards on public event trends

**What does it do?**

This n8n workflow:
- 🌐 Scrapes the EU events portal for new meetings and conferences
- 📅 Extracts event metadata (title, date, location, type, and link)
- 🔁 Handles pagination across multiple pages
- 🚫 Checks for duplicates already stored
- 📊 Saves new records into a connected Google Sheet

**How it works**

1. Triggered daily via cron
2. HTTP node loads the event listing HTML
3. Extract HTML blocks for each event article
4. Parse event name, link, type, location, and full date
5. Concatenate and clean dates for easy tracking
6. Store non-duplicate entries in Google Sheets (see the dedup sketch below)

The workflow uses static data to track pagination and ensure only new events are stored, making it ideal for building up a clean dataset over time.

**What do I need to get started?**

You’ll need:
- A Google Sheet connected to your n8n instance
- No code or AI tools needed — just n8n and this template

**Follow the Guide!**

Sticky notes are included directly inside the workflow to guide you step-by-step through setup and customisation.

🎥 Watch My Tutorial

**Notes**

- This is ideal for analysts and consultants who want clean, structured data from the EU portal
- You can add filtering, email alerts, or AI classifiers later
- This workflow was built using n8n version 1.93.0

Submitted: June 1, 2025
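The duplicate check can be reproduced in a small n8n Code node. This is a minimal sketch only: the node name "Read Existing Rows" and the `link` field are assumptions that must be adapted to your own Sheets and Extract nodes.

```javascript
// n8n Code node (Run Once for All Items) — illustrative sketch only.
// Assumes a prior node "Read Existing Rows" (hypothetical name) has loaded
// the Google Sheet, and that each scraped event carries a unique `link`.
const existing = new Set(
  $('Read Existing Rows').all().map(row => row.json.link)
);

// Keep only events whose link has not been stored yet
return $input.all().filter(item => !existing.has(item.json.link));
```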
by Naveen Choudhary
**Who is this for?**

Marketing agencies, sales teams, lead generation specialists, and business development professionals who need to build comprehensive business databases with contact information for outreach campaigns across any industry.

**What problem is this workflow solving?**

Finding businesses and their contact details manually is time-consuming and inefficient. This workflow automates the entire process of discovering businesses through Google Maps and extracting their digital contact information from websites, saving hours of manual research.

**What this workflow does**

This automated workflow runs every 30 minutes to:
1. Scrape business data from Google Maps using Apify's Google Places crawler
2. Save basic business information (name, address, phone, website) to Google Sheets
3. Filter businesses that have websites
4. Scrape each business's website content using Firecrawl
5. Extract contact information including emails, LinkedIn, Facebook, Instagram, and Twitter profiles (see the extraction sketch below)
6. Store all extracted data in organized Google Sheets for easy access and follow-up

**Setup**

Required services:
- Google Sheets account with OAuth2 setup
- Apify account with API access for Google Places scraping
- Firecrawl account with API access for website scraping

Pre-setup:
1. Copy this Google Sheet
2. Configure your Apify and Firecrawl API credentials in n8n
3. Set up the Google Sheets OAuth2 connection
4. Update the Google Sheet ID in all Google Sheets nodes

Quick start: The workflow includes detailed sticky notes explaining each phase. Simply configure your API credentials and Google Sheet, then activate the workflow.

**How to customize this workflow to your needs**

- **Change search criteria**: Modify the Apify scraping parameters to target different business types (restaurants, gyms, salons, etc.) or locations
- **Adjust schedule**: Change the trigger interval from 30 minutes to your preferred frequency
- **Add more contact fields**: Extend the extraction code to find additional contact information like WhatsApp or Telegram
- **Filter criteria**: Modify the filter conditions to target businesses with specific characteristics
- **Batch size**: Adjust the batch processing to handle more or fewer websites simultaneously

Perfect for lead generation, competitor research, and building targeted marketing lists across any industry or business type.
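The contact-extraction step can be approximated with a few regular expressions over the scraped page text. A hedged sketch for an n8n Code node, assuming the Firecrawl output exposes the page text in a `content` field (adjust to your actual node output):

```javascript
// n8n Code node sketch — the `content` field name is an assumption.
const text = $input.first().json.content || '';

const emails = [...new Set(text.match(/[\w.+-]+@[\w-]+\.[\w.-]+/g) || [])];
const firstMatch = (pattern) => (text.match(pattern) || [])[0] || '';

return [{
  json: {
    emails,
    linkedin:  firstMatch(/https?:\/\/(?:www\.)?linkedin\.com\/[^\s"'<>)]+/i),
    facebook:  firstMatch(/https?:\/\/(?:www\.)?facebook\.com\/[^\s"'<>)]+/i),
    instagram: firstMatch(/https?:\/\/(?:www\.)?instagram\.com\/[^\s"'<>)]+/i),
    twitter:   firstMatch(/https?:\/\/(?:www\.)?(?:twitter|x)\.com\/[^\s"'<>)]+/i),
  },
}];
```

Extending this to WhatsApp or Telegram, as suggested above, is just one more pattern per network.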
by Ranjan Dailata
**Who this is for**

The LinkedIn Profile Extract and JSON Resume Builder is a powerful workflow that scrapes professional profile data from LinkedIn using Bright Data's infrastructure, then transforms that data into a clean, structured JSON resume using Google Gemini. The workflow is ideal for automating resume parsing, candidate profiling, or integration into recruiting platforms.

This workflow is tailored for:
- HR professionals & recruiters automating resume screening
- Talent acquisition platforms enriching candidate profiles
- Developers & AI builders creating resume-parsing AI pipelines
- Data scientists working on labor market analytics
- Growth hackers profiling prospects via public data

**What problem is this workflow solving?**

Parsing resumes or LinkedIn profiles into machine-readable formats is often a manual, error-prone process. Most scraping tools either fail due to anti-bot protections or return unstructured HTML that's hard to work with.

This workflow solves that by:
- Using Bright Data's Web Unlocker for reliable, CAPTCHA-free LinkedIn scraping
- Extracting clean text and structured profile data via the Google Gemini LLM
- Automatically generating a standards-compliant JSON Resume and skills list (see the schema sketch below)
- Sending the resume to webhooks or storing it for downstream usage

**What this workflow does**

1. Accepts a LinkedIn profile URL and required metadata (Bright Data zone, webhook)
2. Scrapes the LinkedIn profile using Bright Data Web Unlocker
3. Extracts clean content and skills using the Google Gemini LLM
4. Builds a JSON-formatted resume following the JSON Resume schema
5. Sends the JSON resume via webhook notification
6. Persists the output by saving the file to disk

**Setup**

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node with the LinkedIn profile, Bright Data zone, and the webhook notification URL. For testing purposes, you can obtain a webhook URL using https://webhook.site/

**How to customize this workflow to your needs**

- **Add language translation**: Insert a translation LLM node to support multilingual profiles.
- **Generate PDF resumes**: Convert JSON to formatted PDF resumes using an HTML-to-PDF module.
- **Push to ATS or CRM**: Add integration nodes to pipe data into applicant tracking systems (ATS), CRMs, or databases.
- **Use alternative LLMs**: Swap Gemini for OpenAI or Anthropic Claude if preferred.
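For reference, this is roughly the top-level shape the JSON Resume schema (https://jsonresume.org/schema/) defines and that the Gemini node is prompted to emit; the values below are placeholders, not output from the workflow.

```javascript
// Trimmed JSON Resume example — placeholder values only.
const resume = {
  basics: {
    name: "Jane Doe",
    label: "Data Engineer",
    location: { city: "Berlin", countryCode: "DE" },
    profiles: [{ network: "LinkedIn", url: "https://www.linkedin.com/in/janedoe" }],
  },
  work: [{ name: "Acme Corp", position: "Senior Engineer", startDate: "2021-03" }],
  education: [{ institution: "TU Berlin", studyType: "MSc", area: "Computer Science" }],
  skills: [{ name: "Data Pipelines", keywords: ["Python", "SQL", "Airflow"] }],
};
```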
by Roman Rozenberger
**How it works**

• Extract AI Overviews from Google Search - Receives data from a browser extension via webhook
• Convert HTML to Markdown - Automatically processes and cleans AI Overview content
• Store in Google Sheets - Archives all extracted AI Overviews with metadata and sources
• Generate SEO Guidelines - AI analyzes page content vs the AI Overview to suggest improvements
• Automate Analysis - Batch process multiple URLs and schedule regular checks

**Set up steps**

• Import workflow - Load the JSON template into your n8n instance (2 minutes)
• Configure Google Sheets - Set up the OAuth connection and create a spreadsheet with the required columns (5 minutes)
• Set up AI provider - Add OpenRouter API credentials for Gemini 2.5 Pro (3 minutes)
• Install browser extension - Deploy the companion Chrome/Firefox extension for data extraction (5 minutes)
• Test webhook endpoint - Verify the connection between the extension and the n8n workflow (2 minutes)

Total setup time: ~15 minutes

**What you'll need:**
- Google account for Sheets integration
- Google Sheet template with required columns
- OpenRouter API key for Gemini 2.5 Pro model access
- Browser extension: Chrome Extension or Firefox Add-on
- n8n instance (local or cloud)

**Use cases:**
- **SEO agencies** - Monitor AI Overview presence for client keywords
- **Content marketers** - Analyze what content gets featured in AI Overviews
- **E-commerce** - Track AI Overview coverage for product-related searches
- **Research** - Build datasets of AI Overview content across different topics

The workflow comes with a free browser extension (Chrome | Firefox) that automatically extracts AI Overview content from Google Search and sends it via webhook to your n8n workflow for processing and analysis.

GitHub repository: https://github.com/romek-rozen/ai-overview-extractor/

**Detailed Setup Instructions - AI Overview Extractor**

Prerequisites:
- **n8n instance** (local or cloud) - version 1.95.3+
- **Google account** for Sheets integration
- **OpenRouter API account** for Gemini 2.5 Pro access
- **Browser** (Chrome/Firefox) for the extension

Step 1: Import the Workflow
1. Open n8n and navigate to Workflows
2. Click "Add workflow" → "Import from JSON"
3. Upload the AI_OVERVIES_EXTRACTOR_TEMPLATE.json file
4. Save the workflow

Step 2: Configure Google Sheets
Create a new Google Sheet with these columns:
extractedAt | searchQuery | sources | markdown | myURL | task | guidelines | key
A public template is available here: https://docs.google.com/spreadsheets/d/15xqZ2dTiLMoyICYnnnRV-HPvXfdgVeXowr8a7kU4uHk/edit?gid=0#gid=0
Copy the Google Sheets URL (you'll need it for the workflow).

Set up Google Sheets credentials:
1. In n8n, go to Settings → Credentials
2. Click "Add credential" → "Google Sheets OAuth2 API"
3. Follow the OAuth setup to authorize n8n access to Google Sheets
4. Name the credential (e.g., "Google Sheets AI Overview")

Configure the Google Sheets nodes. Update these nodes with your Google Sheets URL:
- Get URLs to Analyze
- Save AI Overview to Sheets
- Save SEO Guidelines to Sheets

In each node:
- Set documentId to your Google Sheets URL
- Set sheetName to your Google Sheets URL
- Select your Google Sheets credential

Step 3: Configure AI Provider (OpenRouter)
Get an OpenRouter API key:
1. Sign up at https://openrouter.ai/
2. Generate an API key in your account settings
3. Add credits to your account

Set up OpenRouter credentials:
1. In n8n, go to Settings → Credentials
2. Click "Add credential" → "OpenRouter API"
3. Enter your API key
4. Name the credential (e.g., "OpenRouter AI Overview")

Configure the OpenRouter node:
1. Select the Gemini 2.5 Pro Model node
2. Choose your credential from the dropdown
3. Verify the model (default: google/gemini-2.5-pro-preview)

Step 4: Install Browser Extension
- Chrome (official extension, recommended): visit https://chromewebstore.google.com/detail/ai-overview-extractor/cbkdfibgmhicgnmmdanlhnebbgonhjje and click "Add to Chrome"
- Firefox (official add-on): visit https://addons.mozilla.org/en-US/firefox/addon/ai-overview-extractor/ and click "Add to Firefox"

Step 5: Configure Webhook Connection
Get the webhook URL:
1. In the n8n workflow, click on the Webhook node
2. Copy the webhook URL (it should look like: http://localhost:5678/webhook/ai-overview-extractor-template-123456789)

Configure the extension (a payload sketch follows this listing):
1. Go to Google Search and perform any search with an AI Overview
2. Click the browser extension button (AI Overview Extractor)
3. In the webhook configuration section, paste your webhook URL
4. Click "Test" - it should show ✅ Test successful
5. Click "Save" to store the configuration

Step 6: Activate and Test
Activate the workflow:
1. In n8n, toggle the workflow to "Active" (top-right switch)
2. Verify all nodes are properly configured

Test end-to-end:
1. Go to Google Search
2. Search for something that shows an AI Overview
3. Use the extension to extract the AI Overview
4. Send it via webhook - check your Google Sheets for the data
5. Verify the markdown conversion worked correctly

Optional: Batch Analysis Setup
For the SEO analysis features:
1. In your Google Sheets, add URLs in the myURL column
2. Set the task column to "create guidelines"
3. Run the workflow manually or wait for the 15-minute scheduler
4. Check the guidelines column for AI-generated SEO recommendations

**Troubleshooting**

Webhook issues:
- Ensure n8n is running on port 5678
- Check that the workflow is activated
- Verify the webhook URL format

Google Sheets errors:
- Confirm OAuth credentials are working
- Check the sheet URL format
- Verify column names match exactly
- Ensure the Get URLs to Analyze, Save AI Overview to Sheets, and Save SEO Guidelines to Sheets nodes are properly configured

OpenRouter issues:
- Check API key validity
- Ensure sufficient account credits
- Try different models if Gemini 2.5 Pro fails
- Verify the Gemini 2.5 Pro Model node is properly connected

Extension problems:
- Check the browser console for errors
- Verify the extension is properly installed
- Ensure you're on google.com/search pages
- Confirm the webhook URL is correctly configured in the extension

**Next steps**
- **Customize AI prompts** in the Generate SEO Recommendations node for your specific needs
- **Adjust scheduler frequency** (default: 15 minutes)
- **Add more URL analysis** by populating Google Sheets
- **Monitor usage** and API costs

**Support**
- **GitHub issues**: https://github.com/romek-rozen/ai-overview-extractor/issues
- **n8n community**: https://community.n8n.io/
- **Template documentation**: Check the included README files
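For a quick webhook test without the extension, you can POST a payload by hand. The shape below is an assumption inferred from the sheet columns (extractedAt, searchQuery, sources, markdown); check the extension's GitHub repository for the exact format.

```javascript
// Hypothetical payload — verify field names against the extension's README.
const payload = {
  extractedAt: new Date().toISOString(),
  searchQuery: "best running shoes 2025",
  sources: ["https://example.com/a", "https://example.com/b"],
  markdown: "## AI Overview\n...",
};

// Node 18+ script (or use any HTTP client such as curl/Postman):
await fetch("http://localhost:5678/webhook/ai-overview-extractor-template-123456789", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});
```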
by Ranjan Dailata
**Notice**

Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**

This workflow automates the real-time extraction of job descriptions and salary information from job listing pages using Bright Data MCP, and analyzes the content using OpenAI GPT-4o mini.

This workflow is ideal for:
- **Recruiters & HR tech startups**: Automate job data collection from public listings
- **Market intelligence teams**: Analyze compensation trends across companies or geographies
- **Job boards & aggregators**: Power search results with structured, enriched listings
- **AI workflow builders**: Extend to other career platforms or automate resume-job match analysis
- **Analysts & researchers**: Track hiring signals and salary benchmarks in real time

**What problem is this workflow solving?**

Traditional scraping of job portals is challenging due to cluttered content, anti-scraping measures, and inconsistent formatting. Manually analyzing salary ranges and job descriptions is tedious and error-prone.

This workflow solves the problem by:
- Simulating user behavior using the Bright Data MCP Client to bypass anti-scraping systems
- Extracting structured, clean job data in Markdown format
- Using OpenAI GPT-4o mini to analyze and extract precise salary details and refined job descriptions
- Merging and formatting the result for easy consumption
- Delivering the final output via webhook, Google Sheets, or the file system

**What this workflow does**

Components & flow:

Input nodes:
- job_search_url: The job listing or search result URL
- job_role: The title or role being searched for (used in logging/formatting)

MCP Client operations:
- **MCP Salary Data Extractor**: Simulates browser behavior and scrapes salary-related content (if available)
- **MCP Job Description Extractor**: Extracts the full job description as structured Markdown content

OpenAI GPT-4o mini nodes:
- **Salary Information Extractor**: Uses GPT-4o mini to detect, clean, and standardize salary range data (if any)
- **Job Description Refiner**: Extracts role responsibilities, qualifications, and benefits from unstructured text
- **Company Information Extractor**: Uses Bright Data MCP and GPT-4o mini to extract company information

Merge node: Combines the refined job description and extracted salary information into a unified JSON response object (a sketch of this step follows below).

Aggregate node: Aggregates the job description and salary information into a single JSON response object.

Final output handling. The output is delivered in three formats depending on your downstream needs:
- **Save to disk**: Output stored with a filename including the timestamp and job role
- **Google Sheet update**: Adds a new row with job role, salary, summary, and link
- **Webhook notification**: Pushes the merged response to an external system

**Pre-conditions**

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need an OpenAI API key, since the analysis nodes use GPT-4o mini.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

**Setup**

1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. Name the zone mcp_unlocker in the Bright Data control panel.
5. In n8n, configure the OpenAI account credentials.
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Enter the Bright Data API token in the Environments field as API_TOKEN=<your-token>.

**How to customize this workflow to your needs**

- **Modify the input source**: Change job_search_url to point to any job board or aggregator; customize job_role to reflect the type of jobs being analyzed.
- **Tweak LLM prompts (optional)**: Refine the GPT-4o mini prompts to extract additional fields like benefits, tech stacks, or remote eligibility.
- **Change the output format**: Customize the merged object to output JSON, CSV, or Markdown based on downstream needs; add additional destinations (e.g., Slack, Airtable, Notion) via n8n nodes.
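The merge step, sketched as an n8n Code node. The upstream node names match the components listed above, but the output field names are assumptions to adapt to your own prompts:

```javascript
// n8n Code node sketch — output field names are assumptions.
const salary = $('Salary Information Extractor').first().json;
const description = $('Job Description Refiner').first().json;

return [{
  json: {
    job_role: $('Set Input Fields').first().json.job_role, // hypothetical node name
    salary_range: salary.salary_range ?? 'Not disclosed',
    responsibilities: description.responsibilities,
    qualifications: description.qualifications,
    benefits: description.benefits,
    extracted_at: new Date().toISOString(), // also useful in the output filename
  },
}];
```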
by Ranjan Dailata
**Notice**

Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**

The DNB Company Search & Extract workflow is designed for professionals who need to gather structured business intelligence from Dun & Bradstreet (DNB). It is ideal for:
- Market researchers
- B2B sales & lead generation experts
- Business analysts
- Investment analysts
- AI developers building financial knowledge graphs

**What problem is this workflow solving?**

Gathering business information from the DNB website usually involves manual browsing, copying company details, and organizing them in spreadsheets. This workflow automates the entire data collection pipeline — from searching DNB via Google and scraping relevant pages, to structuring the data and saving it in usable formats.

**What this workflow does**

This workflow performs automated search, scraping, and structured extraction of DNB company profiles using Bright Data's MCP search agents and OpenAI's GPT-4o mini model. Here's what it includes:
1. **Set Input Fields**: Provide search_query and webhook_notification_url (see the query-building sketch below).
2. **Bright Data MCP Client (Search)**: Performs a Google search for the DNB company URL.
3. **Markdown Scrape from DNB**: Scrapes the company page using Bright Data and returns it as Markdown.
4. **OpenAI LLM Extraction**: Transforms the Markdown into clean structured data, extracting business information (company name, size, address, industry, etc.).
5. **Webhook Notification**: Sends the structured response to your provided webhook.
6. **Save to Disk**: Persists the structured data locally for logging or auditing.

**Pre-conditions**

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need an OpenAI API key, since the extraction step uses GPT-4o mini.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

**Setup**

1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. Name the zone mcp_unlocker in the Bright Data control panel.
5. In n8n, configure the OpenAI account credentials.
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Enter the Bright Data API token in the Environments field as API_TOKEN=<your-token>.
7. Update the Set Input Fields node with search_query and webhook_notification_url.
8. Update the file name and path used to persist data on disk.

**How to customize this workflow to your needs**

- **Search engine**: Default is Google, but you can change the MCP client engine to Bing or Yandex if needed.
- **Company scope**: Modify the search query logic for niche filtering, e.g., "biotech startups site:dnb.com".
- **Structured fields**: Customize the LLM prompt to extract additional fields like CEO name, revenue, or ratings.
- **Integrations**: Push output to Notion, Airtable, or CRMs like HubSpot using additional n8n nodes.
- **Formatting**: Convert output to PDF or CSV using the built-in File and Spreadsheet nodes.
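A minimal sketch of the input step as an n8n Code node, showing how a site: filter scopes Google results to DNB profile pages; the company_name field and the webhook URL are placeholders:

```javascript
// n8n Code node sketch — the input field `company_name` is an assumption.
const companyName = $input.first().json.company_name ?? 'Example Corp';

return [{
  json: {
    // The site: operator keeps MCP search results scoped to dnb.com
    search_query: `"${companyName}" site:dnb.com`,
    webhook_notification_url: 'https://webhook.site/your-id', // for testing
  },
}];
```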
by KPendic
**How it works**

This workflow exports all your Cloudflare domains to a Google Sheet, giving you a high-level overview of all of your settings. This can help with debugging, searching, and similar needs.

The flow uses simple paging nodes to iterate over all your domains, because the list can be large. For each host, it merges DNS records and zone settings and transforms them into columns for all of your domains (a standalone sketch of this loop follows below).

**Requirements**

For storing and processing data in this flow you will need:
- A Cloudflare API key/token for retrieving your data (https://dash.cloudflare.com/:account/api-tokens); it needs full access
- Google Sheets auth connected in your n8n credentials
- The Google Sheets template; you can copy my sheet as a starting point by copying it to your account
- Match the Sheet ID in the 'Export' node to your newly created sheet

**Official Cloudflare API documentation**

For full details and specifications, please use the API documentation at: https://developers.cloudflare.com/api/

**Potential API timeouts**

If you encounter Cloudflare API timeouts, I suggest adding a simple sleep/wait node somewhere in the loop; a couple of seconds should resolve them.

**Google Sheets**

I used the conditional formatting feature in Google Sheets to visually distinguish the on/off toggles I was interested in, to easily get a high-level overview when debugging settings on my hosts. But feel free to use your own logic or change it completely.
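For reference, here is the paging-and-merge logic as a standalone Node.js sketch, using the official Cloudflare v4 endpoints the workflow's HTTP nodes call; the flattening into one row per zone is a simplification of what the Export node expects.

```javascript
// Standalone sketch of the loop the workflow builds from n8n nodes.
// Endpoints are from the Cloudflare v4 API; the token needs read access
// to zones, DNS records, and zone settings.
const API = 'https://api.cloudflare.com/client/v4';
const headers = { Authorization: `Bearer ${process.env.CF_API_TOKEN}` };

async function allZones() {
  const zones = [];
  for (let page = 1, totalPages = 1; page <= totalPages; page++) {
    const res = await fetch(`${API}/zones?page=${page}&per_page=50`, { headers });
    const body = await res.json();
    totalPages = body.result_info.total_pages; // paging info from Cloudflare
    zones.push(...body.result);
  }
  return zones;
}

// Merge DNS records and settings into one flat row per zone.
async function zoneRow(zone) {
  const [dns, settings] = await Promise.all([
    fetch(`${API}/zones/${zone.id}/dns_records`, { headers }).then(r => r.json()),
    fetch(`${API}/zones/${zone.id}/settings`, { headers }).then(r => r.json()),
  ]);
  const row = { name: zone.name, status: zone.status, dns_count: dns.result.length };
  for (const s of settings.result) row[s.id] = String(s.value); // e.g. always_use_https: "on"
  return row;
}
```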
by Incrementors
🛒 Google Maps Business Phone Number Scraper Using Bright Data API & Google Sheets Integration

This template requires a self-hosted n8n instance to run.

An automated workflow that extracts business information, including phone numbers, from Google Maps using Bright Data's API and saves the data to Google Sheets for easy access and analysis.

📋 Overview

This workflow provides an automated solution for extracting business contact information from Google Maps based on location and keyword searches. Perfect for lead generation, market research, competitor analysis, and business directory creation.

✨ Key Features

- 🎯 Form-Based Input: Easy-to-use form for location and keyword submission
- 🗺️ Google Maps Integration: Uses Bright Data's Google Maps dataset for accurate business data
- 📊 Comprehensive Data Extraction: Extracts business names, addresses, phone numbers, ratings, and more
- 📧 Automated Processing: Handles the entire scraping process automatically
- 📈 Google Sheets Storage: Automatically saves extracted data to organized spreadsheets
- 🔄 Smart Status Checking: Monitors scraping progress with automatic retry logic (see the polling sketch below)
- ⚡ Fast & Reliable: Professional scraping with built-in error handling
- 🎯 Customizable Output: Configurable data fields for specific business needs

🎯 What This Workflow Does

Input:
- **Location:** Geographic area to search (city, state, country)
- **Keywords:** Business type or industry keywords

Processing:
1. Form Submission: User submits location and keywords through a web form
2. API Request: Sends a scraping request to Bright Data's Google Maps dataset
3. Status Monitoring: Continuously checks scraping progress
4. Data Retrieval: Fetches completed business data when ready
5. Data Storage: Saves extracted information to Google Sheets
6. Error Handling: Implements retry logic for failed requests

Output data points:

| Field | Description | Example |
|-------|-------------|---------|
| Business Name | Official business name from Google Maps | "Joe's Pizza Restaurant" |
| Phone Number | Contact phone number | "+1-555-123-4567" |
| Address | Complete business address | "123 Main St, New York, NY 10001" |
| Rating | Google Maps rating score | 4.5 |
| URL | Google Maps listing URL | "https://maps.google.com/..." |

🚀 Setup Instructions

Prerequisites:
- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- Bright Data account with Google Maps dataset access
- 5-10 minutes for setup

Step 1: Import the Workflow
1. Copy the JSON workflow code from the provided file
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON and click Import

Step 2: Configure Bright Data
1. Set up Bright Data credentials: In n8n: Credentials → + Add credential → HTTP Request Auth. Enter your Bright Data API key and test the connection.
2. Configure the dataset: Ensure you have access to the Google Maps dataset (gd_m8ebnr0q2qlklc02fz) and verify dataset permissions in the Bright Data dashboard.

Step 3: Configure Google Sheets Integration
1. Create a Google Sheet: Go to Google Sheets, create a new spreadsheet named "Business Data" or similar, and copy the Sheet ID from the URL: https://docs.google.com/spreadsheets/d/SHEET_ID_HERE/edit
2. Set up Google Sheets credentials: In n8n: Credentials → + Add credential → Google Sheets OAuth2 API. Complete the OAuth setup and test the connection.
3. Prepare your data sheet with columns:
   - Column A: Name
   - Column B: Address
   - Column C: Rating
   - Column D: Phone Number
   - Column E: URL

Step 4: Update Workflow Settings
1. Update the Google Sheets node: Open the "Save to Google Sheets" node, replace the document ID with your Sheet ID, select your Google Sheets credential, and choose the correct sheet/tab name.
2. Update the Bright Data nodes: Open the HTTP Request nodes, replace BRIGHT_DATA_API_KEY with your actual API key, and verify the dataset ID matches your subscription.

Step 5: Test & Activate
1. Activate the workflow (toggle switch)
2. Submit a test form with location "New York" and keywords "restaurants"
3. Verify the data appears in the Google Sheet
4. Check for proper phone number extraction

📖 Usage Guide

Submitting search requests:
1. Access the form URL provided by n8n
2. Enter the desired location (city, state, or country)
3. Enter relevant keywords (business type, industry, etc.)
4. Submit the form and wait for processing

Understanding the results. Your Google Sheet will populate with business data including:
- Complete business contact information
- Verified phone numbers from Google Maps
- Accurate addresses and ratings
- Direct links to Google Maps listings

🔧 Customization Options

Adding more data points: Edit the "Bright Data API - Request Business Data" node to capture additional fields:
- Business descriptions
- Operating hours
- Reviews count
- Website URLs
- Photos and videos

Modifying search parameters:
- Adjust "limit_per_input" for more or fewer results
- Modify the search type and discovery method
- Add geographical coordinates for precise targeting

🚨 Troubleshooting

1. "Bright Data connection failed"
   - **Cause:** Invalid API credentials or dataset access
   - **Solution:** Verify credentials in the Bright Data dashboard; check dataset permissions
2. "No business data extracted"
   - **Cause:** Invalid search parameters or no results found
   - **Solution:** Try broader keywords or different locations; verify dataset availability
3. "Google Sheets permission denied"
   - **Cause:** Incorrect credentials or sheet permissions
   - **Solution:** Re-authenticate Google Sheets; check sheet sharing settings
4. "Workflow execution timeout"
   - **Cause:** Large search results or slow API response
   - **Solution:** Reduce the search scope, increase timeout settings, check your internet connection

📊 Use Cases & Examples

1. Lead Generation
   - **Goal:** Find potential customers in specific areas
   - Search for businesses by industry and location
   - Extract contact information for outreach campaigns
   - Build targeted prospect lists
2. Market Research
   - **Goal:** Analyze the local business landscape
   - Study competitor density in target markets
   - Identify market gaps and opportunities
   - Gather business intelligence for strategic planning
3. Directory Creation
   - **Goal:** Build comprehensive business directories
   - Create industry-specific business listings
   - Maintain updated contact databases
   - Support local business communities

📈 Performance & Limits

Expected performance:
- **Processing time:** 1-5 minutes per search depending on results
- **Data accuracy:** 95%+ for active Google Maps listings
- **Success rate:** 90%+ for accessible businesses
- **Concurrent requests:** Depends on Bright Data plan limits

Resource usage:
- **Memory:** ~50MB per execution
- **Storage:** Minimal (data stored in Google Sheets)
- **API calls:** 2-3 Bright Data calls + 1 Google Sheets call per search
- **Bandwidth:** ~1-2MB per search request
- **Execution time:** 2-5 minutes for typical searches

Scaling considerations:
- **Rate limiting:** Respect Bright Data API limits
- **Error handling:** Implement retry logic for failed requests
- **Data validation:** Add checks for incomplete business data
- **Cost optimization:** Monitor API usage to control expenses
- **Batch processing:** Group multiple searches for efficiency

🤝 Support & Community

Getting help:
- **n8n Community Forum:** community.n8n.io
- **Documentation:** docs.n8n.io
- **Bright Data Support:** Contact through your dashboard
- **GitHub Issues:** Report bugs and feature requests

Contributing:
- Share improvements with the community
- Report issues and suggest enhancements
- Create variations for specific use cases
- Document best practices and lessons learned

🎯 Ready to Use!

This workflow provides a solid foundation for automated Google Maps business data extraction. Customize it to fit your specific needs and use cases.

Your workflow URL: https://your-n8n-instance.com/workflow/google-maps-scraper

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
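The status-checking loop the workflow implements with Wait and If nodes looks roughly like this as a standalone script. The endpoint paths follow Bright Data's dataset API conventions, so verify them against your own dashboard before relying on this sketch.

```javascript
// Hedged sketch of the polling loop — verify endpoints in your dashboard.
const headers = { Authorization: `Bearer ${process.env.BRIGHT_DATA_API_KEY}` };
const snapshotId = 'SNAPSHOT_ID_FROM_TRIGGER_RESPONSE'; // placeholder

async function waitForSnapshot() {
  for (let attempt = 0; attempt < 30; attempt++) {
    const res = await fetch(
      `https://api.brightdata.com/datasets/v3/progress/${snapshotId}`,
      { headers },
    );
    const { status } = await res.json();
    if (status === 'ready') break;                 // data can be downloaded
    if (status === 'failed') throw new Error('Scrape failed');
    await new Promise(r => setTimeout(r, 30_000)); // retry in 30 seconds
  }
  const data = await fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${snapshotId}?format=json`,
    { headers },
  );
  return data.json(); // array of business records for the Sheets node
}
```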
by Jimleuk
This n8n workflow demonstrates how to create a recipe recommendation chatbot using the Qdrant vector store recommendation API. Use this example to build recommendation features into your AI agents for your users.

**How it works**

- For our recipes, we'll use HelloFresh's weekly courses and recipes as data. We'll scrape the website for this data.
- Each recipe is split, vectorised, and inserted into a Qdrant collection using Mistral embeddings. Additionally, the whole recipe is stored in a SQLite database for later retrieval.
- Our AI agent is set up to recommend recipes from our Qdrant vector store. However, instead of the default similarity search, we'll use the Recommendation API instead (see the sketch below).
- Qdrant's Recommendation API allows you to provide a negative prompt; in our case, the user can specify recipes or ingredients to avoid.
- The AI agent is now able to suggest a recipe recommendation better suited to the user and increase customer satisfaction.

**Requirements**

- Qdrant vector store instance to save the recipes
- Mistral.ai account for embeddings and the LLM agent

**Customising the workflow**

This workflow can work for a variety of different audiences. Try different sets of data such as clothes, sports shoes, vehicles, or even holidays.
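The key difference from a plain similarity search is the negative list. A sketch of the REST call against Qdrant's Recommendation API; the collection name and point IDs are placeholders:

```javascript
// Qdrant Recommendation API — placeholder collection name and IDs.
// positive/negative take point IDs (or raw vectors in newer versions).
const res = await fetch('http://localhost:6333/collections/recipes/points/recommend', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    positive: [42],        // recipes the user liked
    negative: [17, 93],    // recipes/ingredients to avoid
    limit: 3,
    with_payload: true,
  }),
});
const { result } = await res.json();
// Each hit carries the recipe payload; the agent then fetches the full
// recipe from SQLite by ID before replying to the user.
```

In the workflow, this call is issued by the agent's recommendation tool instead of the default vector-store retriever.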
by Sira Ekabut
Facebook access tokens expire quickly, requiring regular updates for continued API access. This workflow simplifies the process of exchanging short-lived tokens for long-lived ones, saving time and reducing manual effort.

**What this workflow does**

- Exchanges a short-lived Facebook user access token for a long-lived token using the Facebook Graph API (see the sketch of the call below).
- Optionally retrieves a long-lived page access token associated with the user.
- Outputs both the user and page tokens for further use in automation or integrations.

**Setup**

Prerequisites:
- A valid Facebook App ID and App Secret.
- A short-lived user access token from the Facebook platform.
- (Optional) The app-scoped user ID for fetching associated page tokens.

Workflow configuration:
1. Replace the placeholder values in the "Set Parameter" node with your Facebook credentials and token.
2. Run the workflow manually to generate long-lived tokens.

Documentation reference. Follow the official Facebook guide for more details: https://developers.facebook.com/docs/facebook-login/guides/access-tokens/get-long-lived/
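The exchange itself is a single Graph API GET, shown below as a standalone sketch; replace the placeholders with your own app credentials and pin the API version you actually target.

```javascript
// The exchange call the workflow's HTTP Request node performs.
const params = new URLSearchParams({
  grant_type: 'fb_exchange_token',
  client_id: 'YOUR_APP_ID',
  client_secret: 'YOUR_APP_SECRET',
  fb_exchange_token: 'SHORT_LIVED_USER_TOKEN',
});

const res = await fetch(`https://graph.facebook.com/v19.0/oauth/access_token?${params}`);
const { access_token: longLivedToken, expires_in } = await res.json();
// Long-lived user tokens currently last about 60 days. Page tokens fetched
// with this token via GET /{app-scoped-user-id}/accounts are long-lived too.
```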
by David Ashby
Complete MCP server exposing all Oura Tool operations to AI agents. Zero configuration needed - all 4 operations are pre-built.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Oura Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (example below)
• Native Integration: Uses the official n8n Oura Tool node with full error handling

📋 Available Operations (4 total)

Every possible Oura Tool operation is included:

🔧 Profile (1 operation)
• Get a profile

🔧 Summary (3 operations)
• Get activity summary
• Get readiness summary
• Get sleep summary

🤖 AI Integration

Parameter handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response format: Native Oura Tool API responses with the full data structure
Error handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every Oura Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
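As an illustration of the $fromAI() mechanism, a tool-node parameter might be wired like this; the parameter names and descriptions below are made-up examples, not fields taken from this template. $fromAI(name, description, type) lets the connected AI agent decide the value at call time.

```
{{ $fromAI('start', 'Start date of the sleep summary, YYYY-MM-DD', 'string') }}
{{ $fromAI('end', 'End date of the sleep summary, YYYY-MM-DD', 'string') }}
```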
by Octoleo
**Overview**

This workflow automates the backup of all workflows from your system to a Git repository hosted on Gitea. It runs on a scheduled trigger, fetching, encoding, and committing workflow data, ensuring seamless version control and disaster recovery.

📌 Quick setup: Just update three global variables and configure authentication—no manual exports needed!

**How It Works (Quick Glance)**

1️⃣ Scheduled Execution → Runs automatically at defined intervals.
2️⃣ Fetch Workflows → Uses the API to retrieve all workflows.
3️⃣ Process Workflows → Converts workflow data into a Git-friendly format.
4️⃣ Commit & Push to Git → Saves workflows in a Gitea repository (see the API sketch below).

**Setup Steps (⚡ Takes ~5 min)**

1️⃣ Set global variables. Go to the Globals section in the workflow and update:
- **repo.url** → https://your-gitea-instance.com *(replace with your actual Gitea URL)*
- **repo.name** → workflows *(repository name where backups will be stored)*
- **repo.owner** → octoleo *(Gitea account that owns the repository)*

📌 These three variables define where the workflows are stored.

2️⃣ Configure Gitea authentication:
- Go to your Gitea account → generate a Personal Access Token
- In the credential manager, create a new Gitea token with:
  - **Name:** Authorization
  - **Value:** Bearer YOUR_PERSONAL_ACCESS_TOKEN

📌 Ensure there is a space after Bearer before the token!

3️⃣ Link credentials to the Git nodes. Attach the Gitea credentials to these three nodes:
- **GetGitea** → Retrieves existing repository data
- **PutGitea** → Updates workflows
- **PostGitea** → Adds new workflows

4️⃣ Link credentials for API requests. Add API authentication in the node that fetches all workflows.

5️⃣ Test & activate:
- Run the workflow manually to confirm backups work.
- Enable the schedule trigger for automation.

📌 The workflow automatically checks for changes before committing updates.

**Why Use This Workflow?**

✅ Automated backups → No manual exports needed.
✅ Version control → Easily track workflow changes.
✅ Simple setup → Just configure globals & credentials.
✅ Secure → Uses token-based authentication.

**Next Steps**

💬 Have questions? Reach out on the forum! 🚀
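A sketch of the create-or-update call behind the PostGitea/PutGitea nodes, using Gitea's contents API; the base URL mirrors the example globals above, and the token placeholder must be replaced with your own.

```javascript
// Sketch of the Gitea contents-API call — base URL from the example globals.
const base = 'https://your-gitea-instance.com/api/v1/repos/octoleo/workflows/contents';
const headers = {
  Authorization: 'Bearer YOUR_PERSONAL_ACCESS_TOKEN',
  'Content-Type': 'application/json',
};

async function backupWorkflow(workflow) {
  const path = `${encodeURIComponent(workflow.name)}.json`;
  const content = Buffer.from(JSON.stringify(workflow, null, 2)).toString('base64');

  // Does the file already exist? If so, we need its sha to update it.
  const existing = await fetch(`${base}/${path}`, { headers });
  const body = { content, message: `Backup: ${workflow.name}` };
  if (existing.ok) body.sha = (await existing.json()).sha;

  await fetch(`${base}/${path}`, {
    method: existing.ok ? 'PUT' : 'POST', // Gitea: POST creates, PUT updates
    headers,
    body: JSON.stringify(body),
  });
}
```

Comparing the stored sha/content before writing is also how the workflow skips commits when nothing has changed.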