by Ranjan Dailata
Who this is for?

Indeed Data Scraper & Summarization with Airtable, Bright Data and Google Gemini is an automated workflow that extracts company profile information from Indeed using Bright Data Web Unlocker, transforms the data using Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:

- **Recruiters and HR teams** who want quick summaries of companies listed on Indeed.
- **Market researchers and analysts** needing structured insights into businesses.
- **Founders, investors, and consultants** scouting potential competitors, partners, or clients.
- **No-code enthusiasts** looking to automate data extraction and enrichment pipelines without manual scraping or parsing.

What problem is this workflow solving?

Manually gathering structured information about companies on Indeed is time-consuming and inconsistent. Pages vary in structure, and extracting clean, digestible summaries can require technical scraping expertise. This workflow automates:

- Extracting company data from Indeed reliably using Bright Data Web Unlocker.
- Cleaning and summarizing the extracted content using the Google Gemini LLM.
- Storing structured insights directly in Airtable for easy access and further workflows.

It eliminates manual research, saves hours, and produces AI-enhanced, easily searchable records.

What this workflow does

1. Triggers on demand.
2. Pulls company page URLs from Airtable.
3. Scrapes content from each Indeed company profile using Bright Data Web Unlocker.
4. Sends the raw HTML to Google Gemini for extraction and summarization.
5. Sends the summarized data to other platforms via a webhook notification mechanism.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials for Bright Data. The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. In n8n, configure the Airtable Personal Access Token account under Credentials.
6. Update the Webhook Notifier with the webhook endpoint of your choice.

A hedged sketch of the Web Unlocker request this workflow makes is shown at the end of this section.

How to customize this workflow to your needs

This workflow is built to be flexible - whether you're a recruiter, market researcher, entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:

- **Extend the scraper**: Modify Bright Data targets to pull job listings, salaries, or employee reviews via the Airtable data source.
- **Customize the summary prompt**: Ask Gemini to extract different attributes, such as hiring trends, workplace practices, etc.
- **Route the output to different destinations**: Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
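For orientation, here is a minimal sketch of the kind of request the Web Unlocker scraping step issues, using Bright Data's direct request API. The zone name, target URL, and token variable are placeholders for your own values, not the workflow's exact configuration.

```javascript
// Minimal sketch: fetch an Indeed company page through Bright Data Web Unlocker.
// Zone name and target URL are illustrative placeholders.
const BRIGHT_DATA_TOKEN = process.env.BRIGHT_DATA_TOKEN; // your Web Unlocker token

async function scrapeCompanyPage(targetUrl) {
  const res = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${BRIGHT_DATA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone: "web_unlocker1", // the Web Unlocker zone you created
      url: targetUrl,
      format: "raw",         // return the raw page body
    }),
  });
  if (!res.ok) throw new Error(`Bright Data request failed: ${res.status}`);
  return res.text(); // raw HTML, ready to hand to Gemini for summarization
}

scrapeCompanyPage("https://www.indeed.com/cmp/example-company")
  .then((html) => console.log(`Fetched ${html.length} bytes of HTML`))
  .catch(console.error);
```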
by InfyOm Technologies
✅ What problem does this workflow solve?

Tired of the back-and-forth involved in scheduling meetings? This workflow lets you offer a seamless, voice-based appointment booking experience. It automatically checks your Cal.com availability and either books a meeting or helps the caller choose another time—powered by ElevenLabs for a human-like voice interaction.

⚙️ What does this workflow do?

- Receives an inbound voice call (e.g., from a website or IVR system).
- Uses ElevenLabs to drive the voice interaction with natural, AI-generated speech.
- Checks real-time availability from your Cal.com calendar.
- Books a meeting if a slot is available.
- If not, asks the user to suggest a new time and checks availability again.
- Confirms the appointment with a verbal response and optional email or SMS.

🔧 Setup

🧠 ElevenLabs Custom Tools Setup

Add two tools under Custom Tools in ElevenLabs with the following details:

- Name: bookSlot
  Description: Use this tool when the user confirms their slot and appointment, and you have their proper name and email.
  POST: {n8n_webhook_url}
- Name: checkAvailableSlot
  Description: Use this tool to check available slots in our calendar.
  POST: {n8n_webhook_url}

🗣 ElevenLabs Prompt Configuration

First Message: Thanks for calling InfyOm Technologies. How can I help you?

Use the following System Prompt:

1. Personality
You are a clear, helpful, and respectful assistant focused solely on booking appointments for clients.
- **Identity**: A virtual appointment scheduler.
- **Core Traits**: Polite, efficient, conversational, respectful.
- **Role**: Guide users through choosing a time, checking availability, and finalizing the booking.

2. Tone
Your tone is polite, professional, and engaging—friendly enough to feel human, but focused enough to stay on task.
- Use conversational cues like “Okay,” “Got it,” “Sounds good,” etc.
- Maintain a warm but efficient pace.
- Speak clearly and respectfully at all times.
- Keep the conversation focused on the booking task.

3. Goal
Your task is to book an appointment for the client.

Step-by-step Conversation Flow
- Greeting & Purpose: Greet politely and state the purpose. Example: “Hi! I’m here to help you book an appointment.”
- Request Preferred Time: Ask: “Can you please tell me your preferred time slot for the appointment?” Understand the user's date, and if it is not clear, ask for the full date with month and year.
- Check Availability: Use the checkAvailableSlot tool with the user’s preferred time.
  - If the slot is available: Confirm with the user: “That slot is available. Should I go ahead and book it for you?” If the user agrees, use bookSlot.
  - If the slot is not available: Say: “It looks like that time isn’t available... Would you like to look for the same time on the next day?”
- Handle Next-Day Option: If the user agrees, check availability for the same time on the next day using checkAvailableSlot. If available, confirm and use bookSlot. If not, politely inform the user and ask for a different time.
- Close the Call: Confirm the booking if done. Example: “Great! Your appointment is booked. Thank you and have a wonderful day!”

4. Guardrails
- **Do not** discuss anything unrelated to appointment booking. If the user asks for something outside your scope, say: “I’m only here to help with appointment bookings. For other questions, please contact our support team.”
- Never speculate about unavailable data or functions.
- Never ask the user to speak the date in a specific format (for example, “Say the date in day, month, and year format”). If you can't understand the user's date, say: “Please speak the full date.”

5. Tools
You can use the following tools to help book appointments:
- checkAvailableSlot: Use this to check if the user’s requested time is open.
- bookSlot: Use this to confirm the appointment after the user agrees.
- End call: Always say, “Thanks for reaching out to us. Have a nice day.”

📅 Cal.com API Setup

1. Go to cal.com and generate an API key.
2. Create new Cal API credentials in n8n.
3. Set this API key in the credentials.

A hedged sketch of the availability check behind the n8n webhook is shown at the end of this section.

🧠 How it Works

☎️ 1. Incoming Call
An inbound call is received by the system, and the user begins the conversation with your voice AI agent.

🧏 2. Voice Interaction via ElevenLabs
The caller is greeted and asked for their preferred appointment time. All conversations are powered by natural AI speech from ElevenLabs.

🗓 3. Availability Check (Cal.com)
The requested time is validated against your Cal.com availability:
- ✅ If available, the appointment is booked instantly.
- ❌ If unavailable, the agent informs the caller and asks for another time.

🔁 4. Retry Loop (If Slot Unavailable)
If the initial slot is unavailable:
- The agent asks for a new preferred time.
- The process repeats until a valid slot is found or a fallback message is delivered.

✅ 5. Appointment Confirmation
Once booked, the AI confirms the appointment verbally. You may also configure it to send:
- 📧 Email confirmations
- 📱 SMS reminders (optional)

👤 Who can use it?

This is perfect for:
- 🧑‍⚕️ Clinics or Therapists
- 🧑‍💼 Consultants & Coaches
- 🏢 Sales Teams
- 🧠 AI-first SaaS Tools

If your business relies on scheduled meetings and you'd like to automate bookings using a conversational voice experience, this is your go-to no-code solution.
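Below is a minimal sketch of the availability check the checkAvailableSlot webhook might perform. It assumes Cal.com's v1 slots endpoint; treat the endpoint, parameters, and response shape as assumptions and verify them against the current Cal.com API docs. The event type ID and requested time are placeholders.

```javascript
// Minimal sketch of the availability check behind the checkAvailableSlot tool.
// Assumes Cal.com's v1 /slots endpoint; verify against the current API docs.
const CAL_API_KEY = process.env.CAL_API_KEY;

async function isSlotAvailable(eventTypeId, requestedIso) {
  const start = new Date(requestedIso);
  const end = new Date(start.getTime() + 60 * 60 * 1000); // look 1 hour ahead
  const params = new URLSearchParams({
    apiKey: CAL_API_KEY,
    eventTypeId: String(eventTypeId),
    startTime: start.toISOString(),
    endTime: end.toISOString(),
  });
  const res = await fetch(`https://api.cal.com/v1/slots?${params}`);
  if (!res.ok) throw new Error(`Cal.com request failed: ${res.status}`);
  const data = await res.json();
  // Assumed shape: data.slots keyed by date, each an array of { time } entries.
  const slots = Object.values(data.slots ?? {}).flat();
  return slots.some((s) => new Date(s.time).getTime() === start.getTime());
}

isSlotAvailable(12345, "2025-01-15T10:00:00.000Z")
  .then((ok) => console.log(ok ? "Slot is available" : "Slot is taken"))
  .catch(console.error);
```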
by Ranjan Dailata
Disclaimer

This template is only available on n8n self-hosted as it makes use of the community node for the MCP Client.

Who this is for?

The Extract, Transform LinkedIn Data with Bright Data MCP Server & Google Gemini workflow is an automated solution that scrapes LinkedIn content via the Bright Data MCP Server, then transforms the response using a Gemini LLM. The final output is sent via webhook notification and also persisted on disk.

This workflow is tailored for:

- **Data Analysts**: Who require structured LinkedIn datasets for analytics and reporting.
- **Marketing and Sales Teams**: Looking to enrich lead databases, track company updates, and identify market trends.
- **Recruiters and Talent Acquisition Specialists**: Who want to automate candidate sourcing and company research.
- **AI Developers**: Integrating real-time professional data into intelligent applications.
- **Business Intelligence Teams**: Needing current and comprehensive LinkedIn data to drive strategic decisions.

What problem is this workflow solving?

Gathering structured and meaningful information from the web is traditionally slow, manual, and error-prone. This workflow solves:

- Reliable web scraping using the Bright Data MCP Server LinkedIn tools.
- LinkedIn person and company web scraping with AI Agents set up with the Bright Data MCP Server tools.
- Data extraction and transformation with the Google Gemini LLM.
- Persisting the LinkedIn person and company info to disk.
- Performing a webhook notification with the LinkedIn person and company info.

What this workflow does?

This n8n workflow performs the following steps:

1. Trigger: Start manually.
2. Input URL(s): Specify the LinkedIn person and company URL.
3. Web Scraping (Bright Data): Use Bright Data's MCP Server LinkedIn tools for the person and company data extraction.
4. Data Transformation & Aggregation: Uses the Google LLM for handling the data transformation.
5. Store / Output: Save results to disk and also perform a webhook notification.

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have the Google Gemini API Key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node.

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel.
6. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
7. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox above as API_TOKEN=<your-token>.
8. Update the LinkedIn person and company URLs in the workflow.
9. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
10. Update the file name and path to persist on disk.

How to customize this workflow to your needs

- Different Inputs: Instead of static URLs, accept URLs dynamically via webhook or form submissions.
- Data Extraction: Modify the LinkedIn Data Extractor node with a suitable prompt to format the data as you wish.
- Outputs: Update the webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.

A hedged sketch of the final store-and-notify step is shown below.
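This minimal Node.js sketch illustrates the "Store / Output" step: persist the transformed data to disk and fire a webhook notification. The file path, webhook URL, and sample payload are illustrative placeholders, not the workflow's exact values.

```javascript
// Minimal sketch of the final "Store / Output" step: persist the transformed
// LinkedIn data to disk and fire a webhook notification.
const fs = require("node:fs");

async function storeAndNotify(linkedinData) {
  // Persist to disk as pretty-printed JSON.
  fs.writeFileSync("./linkedin-output.json", JSON.stringify(linkedinData, null, 2));

  // Notify a downstream system via webhook (placeholder URL).
  const res = await fetch("https://example.com/webhook/linkedin", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(linkedinData),
  });
  if (!res.ok) throw new Error(`Webhook notification failed: ${res.status}`);
}

storeAndNotify({
  person: { name: "Jane Doe", headline: "Data Analyst" },
  company: { name: "Example Corp", industry: "Software" },
}).catch(console.error);
```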
by Akash Kankariya
🚀 Discover trending and viral YouTube videos easily with this powerful n8n automation! This workflow helps you perform bulk research on YouTube videos related to any search term, analyzing engagement data like views, likes, comments, and channel statistics — all in one streamlined process.

✨ Perfect for:
- Content creators wanting to find viral video ideas
- Marketers analyzing competitor content
- YouTubers optimizing their content strategy

How It Works 🎯

1️⃣ Input Your Search Term — Simply enter any keyword or topic you want to research.
2️⃣ Select Video Format — Choose between short, medium, or long videos.
3️⃣ Choose Number of Videos — Define how many videos to analyze in bulk.
4️⃣ Automatic Data Fetch — The workflow grabs video IDs, then fetches detailed video data and channel statistics from the YouTube API (see the sketch after this section).
5️⃣ Performance Scoring — Videos are scored based on engagement rates with easy-to-understand labels like 🚀 HOLY HELL (viral) or 💀 Dead.
6️⃣ Export to Google Sheets — All data, including thumbnails and video URLs, is appended to your Google Sheet for comprehensive review and easy sharing.

Setup Instructions 🛠️

Google API Key
- Get your YouTube Data API key from the Google Developers Console.
- Add it securely in the n8n credentials manager (do not hardcode it).

Google Sheets Setup
- Create a Google Sheet to store your results (a template link is provided).
- Share the sheet with the Google account used in n8n.
- Update the workflow with your sheet's Document ID and Sheet Name if needed.

Run the Workflow
- Trigger the form webhook via browser or POST call.
- Enter the search term, format, and number of videos.
- Let it process and check your Google Sheet for insights!

Features ✨
- Bulk fetches the latest and top-viewed YouTube videos.
- Intelligent video performance scoring with emojis for quick insights 🔥🎬.
- Organizes data into Google Sheets with thumbnail previews 🖼️.
- Easy to customize search parameters via an intuitive form.
- Fully automated, no manual API calls needed.

Get Started Today! 🌟 Boost your YouTube content strategy and stay ahead with this powerful viral video research automation! Try it now on your n8n instance and tap into the world of viral content like a pro 🎥💡
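For orientation, here is a minimal sketch of the two YouTube Data API calls the workflow chains together. The search term, duration filter, and the simple engagement score with its thresholds are illustrative assumptions, not the workflow's exact parameters.

```javascript
// Minimal sketch: search for videos, then fetch their statistics.
// YT_API_KEY, the query, and the scoring thresholds are placeholders.
const YT_API_KEY = process.env.YT_API_KEY;
const BASE = "https://www.googleapis.com/youtube/v3";

async function researchVideos(query, duration = "short", maxResults = 10) {
  // Step 1: grab video IDs for the search term.
  const search = await fetch(
    `${BASE}/search?part=snippet&type=video&videoDuration=${duration}` +
      `&maxResults=${maxResults}&q=${encodeURIComponent(query)}&key=${YT_API_KEY}`,
  ).then((r) => r.json());
  const ids = search.items.map((i) => i.id.videoId).join(",");

  // Step 2: fetch view/like/comment counts for those IDs.
  const videos = await fetch(
    `${BASE}/videos?part=snippet,statistics&id=${ids}&key=${YT_API_KEY}`,
  ).then((r) => r.json());

  // Step 3: score by engagement rate (likes + comments per view).
  return videos.items.map((v) => {
    const s = v.statistics;
    const rate =
      (Number(s.likeCount ?? 0) + Number(s.commentCount ?? 0)) /
      Math.max(1, Number(s.viewCount ?? 0));
    return {
      title: v.snippet.title,
      url: `https://www.youtube.com/watch?v=${v.id}`,
      views: s.viewCount,
      label: rate > 0.05 ? "🚀 HOLY HELL" : rate > 0.01 ? "🔥 Solid" : "💀 Dead",
    };
  });
}

researchVideos("n8n automation").then(console.table).catch(console.error);
```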
by Jean-Marie Rizkallah
🧩 Jamf Policies Export to Slack

Quickly export and review your entire Jamf policy configuration—including triggers, frequencies, and scope—directly in Slack. This enables IT and security teams to audit policy setups without logging into Jamf or generating reports manually.

❗The Problem
Jamf Pro lacks a straightforward way to quickly review or share a list of all configured policies, including key attributes like frequency, scope, or triggers. Security teams often need this for audit or compliance reviews, but navigating Jamf’s UI or exporting via the API is time-consuming.

🔧 This Fixes It
This workflow fetches all policies, extracts the most relevant fields, compiles them into a CSV file, and posts that readable file into a designated Slack channel—automatically or on demand.

✅ Prerequisites
• A Jamf Pro API key (OAuth2) with read access to policies
• A Slack app with permission to post files into your chosen channel

🔍 How it works
• Manually trigger or use the webhook to initiate the flow
• Retrieve all policies from Jamf via the XML API
• Convert the XML response into JSON
• Split and loop through each policy ID
• Retrieve detailed data for each policy
• Format relevant fields (ID, name, trigger, scope, etc.)
• Convert the final data set into a .csv file
• Upload the file to your Slack channel

A hedged sketch of the fetch-and-format step is shown after this section.

⚙️ Set up steps
• Takes ~10 minutes to configure
• Set the Jamf BaseURL in the “Jamf Server” node
• Configure Jamf OAuth2 credentials in the HTTP Request nodes
• Adjust the fields for export in the “Set-fields” node
• Set your Slack credentials and target channel in the “Post to Slack” node
• Optional: Customize the exported fields or filename

🔄 Automation Ready
Schedule this flow daily/weekly, or tie it to change events to keep your team informed.
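The sketch below approximates the fetch-and-format step in plain Node.js. It assumes a pre-obtained Jamf Pro bearer token and uses the Classic API's JSON output (Accept: application/json) instead of the workflow's XML-to-JSON conversion; the base URL, token variable, and exported fields are placeholders to adapt.

```javascript
// Minimal sketch: list Jamf policies, pull details, and build CSV rows.
const JAMF_BASE = "https://yourserver.jamfcloud.com";
const TOKEN = process.env.JAMF_TOKEN; // pre-obtained bearer token (assumption)
const headers = { Authorization: `Bearer ${TOKEN}`, Accept: "application/json" };

async function exportPoliciesCsv() {
  const list = await fetch(`${JAMF_BASE}/JSSResource/policies`, { headers })
    .then((r) => r.json());

  const rows = [["id", "name", "enabled", "trigger", "frequency"]];
  for (const { id } of list.policies) {
    const { policy } = await fetch(
      `${JAMF_BASE}/JSSResource/policies/id/${id}`, { headers },
    ).then((r) => r.json());
    const g = policy.general;
    rows.push([g.id, g.name, g.enabled, g.trigger, g.frequency]);
  }
  // Quote each field so commas in policy names don't break the CSV.
  return rows
    .map((r) => r.map((v) => `"${String(v ?? "")}"`).join(","))
    .join("\n");
}

exportPoliciesCsv().then(console.log).catch(console.error);
```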
by David Ashby
Complete MCP server exposing 2 CarbonDoomsDay API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add CarbonDoomsDay credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the CarbonDoomsDay API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.carbondoomsday.com/api
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 Co2 (2 endpoints)

• GET /co2/: List CO2 measurements
• GET /co2/{date}/: Get a CO2 measurement by date

These are CO2 measurements from the Mauna Loa observatory. This data is made available through the good work of the people at the Mauna Loa observatory. Their release notes say: "These data are made freely available to the public and the scientific community in the belief that their wide dissemination will lead to greater understanding and new scientific insights." We currently scrape the following sources: [co2_mlo_weekly.csv], [co2_mlo_surface-insitu_1_ccgg_DailyData.txt], and [weekly_mlo.csv]. We have daily CO2 measurements as far back as 1958. Learn about using pagination via [the 3rd party documentation].

[co2_mlo_weekly.csv]: https://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_mlo_weekly.csv
[co2_mlo_surface-insitu_1_ccgg_DailyData.txt]: ftp://aftp.cmdl.noaa.gov/data/trace_gases/co2/in-situ/surface/mlo/co2_mlo_surface-insitu_1_ccgg_DailyData.txt
[weekly_mlo.csv]: http://scrippsco2.ucsd.edu/sites/default/files/data/in_situ_co2/weekly_mlo.csv
[the 3rd party documentation]: http://www.django-rest-framework.org/api-guide/pagination/#pagenumberpagination

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native CarbonDoomsDay API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to the MCP endpoints (a hedged sketch of the underlying API call is shown after this section)

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
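For reference, here is a minimal sketch of a direct call to the underlying API, the same endpoints the MCP server wraps. The sample date and the response field names (`date`, `ppm`) are assumptions based on the API description; check the live API for the exact schema.

```javascript
// Minimal sketch: call the CarbonDoomsDay API directly.
const BASE = "https://api.carbondoomsday.com/api";

async function getCo2(date) {
  // By-date lookup, e.g. "2020-01-01"; omit the date to list measurements.
  const url = date ? `${BASE}/co2/${date}/` : `${BASE}/co2/?page=1`;
  const res = await fetch(url, { headers: { Accept: "application/json" } });
  if (!res.ok) throw new Error(`CarbonDoomsDay request failed: ${res.status}`);
  return res.json();
}

getCo2("2020-01-01")
  .then((m) => console.log(`CO2 on ${m.date}: ${m.ppm} ppm`)) // assumed fields
  .catch(console.error);
```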
by Oneclick AI Squad
Transform your meetings into actionable insights automatically! This workflow captures meeting audio, transcribes conversations, generates AI summaries, and emails the results to participants—all without manual intervention.

What's the Goal?

- **Auto-record meetings** when they start and stop when they end
- **Transcribe audio** to text using Vexa Bot integration
- **Generate intelligent summaries** with AI-powered analysis
- **Email summaries** to meeting participants automatically
- **Eliminate manual note-taking** and post-meeting admin work
- **Never miss important discussions** or action items again

Why Does It Matter?

- **Save 90% of Post-Meeting Time**: No more manual transcription or summary writing
- **Never Lose Key Information**: Automatic capture ensures nothing falls through the cracks
- **Improve Team Productivity**: Focus on discussions, not note-taking
- **Perfect Meeting Records**: Searchable transcripts and summaries for future reference
- **Instant Distribution**: Summaries reach all participants immediately after meetings

How It Works

Step 1: Meeting Detection & Recording
- **Start Meeting Trigger**: Detects when a meeting begins via a Google Meet webhook
- **Launch Vexa Bot**: Automatically joins the meeting and starts recording
- **End Meeting Trigger**: Detects the meeting end and stops recording

Step 2: Audio Processing & Transcription
- **Stop Vexa Bot**: Ends recording and retrieves the audio file
- **Fetch Meeting Audio**: Downloads recorded audio from Vexa Bot
- **Transcribe Audio**: Converts speech to text using AI transcription

Step 3: AI Summary Generation
- **Prepare Transcript**: Formats transcribed text for AI processing
- **Generate Summary**: The AI model creates a concise meeting summary with:
  - Key discussion points
  - Decisions made
  - Action items assigned
  - Next steps identified

Step 4: Distribution
- **Send Email**: Automatically emails the summary to all meeting participants

A hedged sketch of the summary-generation step is shown at the end of this section.

Setup Requirements

Google Meet Integration:
- Configure the Google Meet webhook and API credentials
- Set up meeting detection triggers
- Test with a sample meeting

Vexa Bot Configuration:
- Add Vexa Bot API credentials for recording
- Configure audio file retrieval settings
- Set recording quality and format preferences

AI Model Setup:
- Configure an AI transcription service (e.g., OpenAI Whisper, Google Speech-to-Text)
- Set up AI summary generation with custom prompts
- Define summary format and length preferences

Email Configuration:
- Set up SMTP credentials for email distribution
- Create email templates for meeting summaries
- Configure participant list extraction from meeting metadata

Import Instructions

1. Get Workflow JSON: Copy the workflow JSON code
2. Open n8n Editor: Navigate to your n8n dashboard
3. Import Workflow: Click the menu (⋯) → "Import from Clipboard" → Paste JSON → Import
4. Configure Credentials: Add API keys for Google Meet, Vexa Bot, AI services, and SMTP
5. Test Workflow: Run a test meeting to verify end-to-end functionality

Your meetings will now automatically transform into actionable summaries delivered to your inbox!
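The sketch below shows one way the "Generate Summary" step can be implemented: build a structured prompt from the transcript and call an LLM. OpenAI's chat completions API is used purely as an illustration here; the model name and prompt wording are assumptions, and the workflow lets you plug in any AI model.

```javascript
// Minimal sketch of the "Generate Summary" step: prompt an LLM with the transcript.
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;

async function summarizeTranscript(transcript) {
  const prompt =
    "Summarize this meeting transcript. Return:\n" +
    "1. Key discussion points\n2. Decisions made\n" +
    "3. Action items (with owners)\n4. Next steps\n\n" +
    `Transcript:\n${transcript}`;

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model choice
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

summarizeTranscript("Alice: Let's ship v2 on Friday. Bob: I'll update the docs.")
  .then(console.log)
  .catch(console.error);
```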
by Kanaka Kishore Kandregula
Daily Magento 2 Stock Check Automation

It identifies SKUs with low inventory per source and sends daily alerts via:
- 📬 Gmail (HTML email)
- 💬 Slack (formatted text message)

This automation empowers store owners and operations teams to stay ahead of inventory issues by proactively monitoring stock levels across all Magento 2 sources. By receiving early alerts for low-stock products, businesses can restock before items sell out—ensuring continuous product availability, reducing missed sales opportunities, and maintaining customer trust. Avoiding stockouts not only protects your brand reputation but also keeps your store competitive by preventing customers from turning to competitors due to unavailable items. Timely restocking leads to higher fulfillment rates, improved customer satisfaction, and ultimately, stronger revenue and long-term loyalty.

✅ Features:
- Filters out configurable, virtual, and downloadable products
- Uses Magento 2 MSI stock per source
- Customizable thresholds (default: ≤10 overall or ≤5 per source)
- HTML-formatted email report
- Slack notification with code-formatted output
- Runs daily via Cron (08:50 AM)
- No third-party modules required
- One-time setup

A hedged sketch of the underlying stock query is shown after this section.

🔑 Credentials Used
- HTTP Request (Magento 2 REST API using a Bearer Token)
- Gmail (OAuth2)
- Slack (OAuth2 or Webhook)

📊 Tags
Magento, Inventory, MSI, Stock Alert, Ecommerce, Slack, Gmail, Automation

📂 Category
E-commerce → Magento 2 (Adobe Commerce)

👤 Author
Kanaka Kishore Kandregula
Certified Magento 2 Developer
https://gravatar.com/kmyprojects
https://www.linkedin.com/in/kanakakishore
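For orientation, here is a minimal sketch of the per-source low-stock query against Magento 2's MSI REST API. The store URL, page size, and threshold are placeholders, and the integration token is assumed to have access to the inventory/source-items endpoint; the workflow's actual HTTP Request nodes may paginate and filter differently.

```javascript
// Minimal sketch of the low-stock query against Magento 2's MSI REST API.
const MAGENTO_BASE = "https://your-store.example.com";
const TOKEN = process.env.MAGENTO_TOKEN; // Magento integration access token
const PER_SOURCE_THRESHOLD = 5; // matches the default ≤5 per source

async function findLowStock() {
  const url =
    `${MAGENTO_BASE}/rest/V1/inventory/source-items` +
    `?searchCriteria[pageSize]=500&searchCriteria[currentPage]=1`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${TOKEN}` },
  });
  if (!res.ok) throw new Error(`Magento request failed: ${res.status}`);
  const data = await res.json();

  // Keep only enabled source items at or below the per-source threshold.
  return data.items.filter(
    (i) => i.status === 1 && i.quantity <= PER_SOURCE_THRESHOLD,
  );
}

findLowStock()
  .then((items) =>
    items.forEach((i) =>
      console.log(`${i.sku} @ ${i.source_code}: ${i.quantity} left`),
    ),
  )
  .catch(console.error);
```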
by Ranjan Dailata
Who this is for?

This workflow is designed for professionals and teams who need real-time, structured insights from Perplexity Search results without manual effort.

What problem is this workflow solving?

This n8n workflow solves the problem of automating Perplexity Search result extraction, cleanup, summarization, and AI-enhanced formatting for downstream use, such as sending the results to a webhook or another system.

What this workflow does

Automates Perplexity Search via Bright Data
- Uses Bright Data's proxy-based SERP API to run a search query programmatically.
- Makes the process repeatable and scriptable with different search terms and regions/zones.

Cleans and Extracts Useful Content
- The Readable Data Extractor uses LLM-based cleaning to remove HTML/CSS/JS from the response and extract pure text data.
- Converts messy, unstructured web content into a structured, machine-readable format.

Summarizes Search Results
- Through the Gemini Flash + Summarization Chain, it generates a concise summary of the search results (see the sketch after this section).
- Ideal for users who don't have time to read full pages of search results.

Formats Data Using an AI Agent
- The AI Agent acts like a virtual assistant that:
  - Understands search results
  - Formats them in a readable, JSON-compatible form
  - Prepares them for webhook delivery

Delivers Results to a Webhook
- Sends the final summary + structured search result to a webhook (your app, a Slack bot, Google Sheets, or a CRM).

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Perplexity Search Request node with the prompt you wish to perform the search with.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

1. Change the Perplexity Search Input
- Default: It searches a fixed query or dataset.
- Customize: Accept input from a Google Sheet, Airtable, or a form. Auto-trigger searches based on keywords or schedules.

2. Customize the Summarization Style (LLM Output)
- Default: General summary using Google Gemini or OpenAI.
- Customize: Add a tone: formal, casual, technical, executive summary, etc. Focus on specific sections: pricing, competitors, FAQs, etc. Translate the summaries into multiple languages. Add bullet points, pros/cons, or insight tags.

3. Choose Where the Results Go
- Options: Email, Slack, Notion, Airtable, Google Docs, or a dashboard. Auto-create content drafts for WordPress or newsletters. Feed into CRM notes or attach to Salesforce leads.
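The sketch below approximates the Gemini Flash summarization step. The model name and prompt wording are illustrative assumptions; swap in whatever the workflow's Summarization Chain is actually configured with.

```javascript
// Minimal sketch of the Gemini Flash summarization step.
const GEMINI_API_KEY = process.env.GEMINI_API_KEY;

async function summarizeWithGemini(extractedText) {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-1.5-flash:generateContent?key=${GEMINI_API_KEY}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [
        {
          parts: [
            {
              text:
                "Summarize these search results in 5 concise bullet points:\n\n" +
                extractedText,
            },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  return data.candidates[0].content.parts[0].text;
}

summarizeWithGemini("Perplexity result text goes here...")
  .then(console.log)
  .catch(console.error);
```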
by Lucas Peyrin
How it works

This template is an interactive, hands-on tutorial designed to demystify what an API is and how it works, right inside your n8n canvas. It uses a simple restaurant analogy to explain the core concepts:

- **You** are the "Client" (an **HTTP Request** node).
- **The Kitchen** is the "Server" (a **Webhook** node).
- **The API** is the Menu and the Waiter—the set of rules for how you can ask for things and get a response.

The workflow is a series of self-contained lessons. Each lesson pairs an HTTP Request node (the customer placing an order) with a Webhook node (the kitchen receiving and responding to the order) to demonstrate a key concept:

1. The Basics: Making a simple GET request to a URL.
2. Customizing: Using Query Parameters to filter or modify your request.
3. Sending Data: Using the POST method and a Body to send information to the server.
4. Identification: Using Headers and simple Authentication to prove who you are.
5. Handling Delays: Understanding how Timeouts prevent your workflow from getting stuck.

Set up steps

Setup time: < 1 minute

This workflow is a self-contained tutorial and requires no external services or credentials. You may want to check the Base URL.

1. Click "Execute Workflow" to run the entire tutorial.
2. Follow the flow from top to bottom, exploring each "Lesson".
3. For each lesson, click on the HTTP Request node and its corresponding Webhook node to see how they are configured and what they do.
4. Read the sticky notes next to each lesson—they contain the core explanations!

That's it! Explore and have fun learning the fundamentals of APIs in an interactive way. The same five lessons, expressed in plain code, look roughly like the sketch below.
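This sketch restates the five lessons as plain `fetch` calls. The base URL and paths are placeholders for whatever your Webhook nodes expose, not the template's actual endpoints.

```javascript
// The five lessons as plain fetch calls; BASE and paths are placeholders.
const BASE = "https://your-n8n.example.com/webhook";

async function runLessons() {
  // Lesson 1 – The Basics: a simple GET request.
  await fetch(`${BASE}/menu`);

  // Lesson 2 – Customizing: query parameters filter or modify the request.
  await fetch(`${BASE}/menu?category=desserts&limit=3`);

  // Lesson 3 – Sending Data: POST with a JSON body.
  await fetch(`${BASE}/orders`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ dish: "pasta", quantity: 2 }),
  });

  // Lesson 4 – Identification: headers and simple authentication.
  await fetch(`${BASE}/my-orders`, {
    headers: { Authorization: "Bearer my-secret-token" },
  });

  // Lesson 5 – Handling Delays: a timeout stops a stuck request.
  await fetch(`${BASE}/slow-kitchen`, { signal: AbortSignal.timeout(5000) });
}

runLessons().catch(console.error);
```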
by Dvir Sharon
🎯 Automated TikTok Influencer Discovery & Analysis

A complete n8n automation that discovers TikTok influencers using Bright Data, evaluates their fit using Claude AI, and sends personalized outreach emails. Designed for marketing teams and brands that need a scalable, intelligent way to find and connect with relevant creators.

📋 Overview

This workflow provides a full-service influencer discovery pipeline: it finds TikTok profiles using search keywords, uses AI to assess alignment with your brand, and initiates contact with qualified influencers. Ideal for influencer marketing, brand partnerships, and campaign planning.

✨ Key Features

- 🔍 **Keyword-Based Discovery**: Locate TikTok influencers by specific niche-related keywords.
- 📊 **Bright Data Integration**: Access accurate TikTok profile data from Bright Data’s datasets.
- 🤖 **AI-Powered Analysis**: Claude AI evaluates each profile's fit with your brand based on bio, content, and more.
- 📧 **Smart Email Notifications**: Sends tailored outreach emails to creators deemed highly relevant.
- 📈 **Data Storage**: Google Sheets stores profile details, AI evaluation results, and outreach status.
- 🎯 **Intelligent Filtering**: Processes only influencers who meet your criteria (e.g., 5,000+ followers, industry match).
- ⚡ **Fast & Reliable**: Uses professional scraping with robust error handling.
- 🔄 **Batch Processing**: Supports bulk influencer processing through a single automated flow.

🎯 What This Workflow Does

Input
- **Search Keywords**: TikTok terms for finding niche creators
- **Business Info**: Brand description and industry
- **Collaboration Criteria**: Follower count minimum, niche alignment

Processing Steps
1. Form Submission
2. TikTok Discovery via Bright Data
3. Data Extraction and Normalization
4. Save to Google Sheets
5. Relevance Scoring via Claude AI
6. Filtering Based on AI Score + Follower Count
7. Personalized Email Outreach

Output Data Points

| Field | Description | Example |
|---------------|------------------------------------|----------------------------------|
| Profile ID | TikTok profile identifier | tiktoker123456 |
| Username | TikTok handle | @creativecreator |
| URL | Profile link | https://tiktok.com/@creativecreator |
| Description | Creator bio | "Fashion & lifestyle content..." |
| Followers | Total follower count | 50,000 |
| Collaboration | AI assessment of brand fit | "Highly Relevant" |
| Analysis | 50-word Claude AI relevance summary| "Strong alignment with fashion..."|

🚀 Setup Instructions

Prerequisites
- n8n (cloud or self-hosted)
- Bright Data account with TikTok access
- Google Sheets + Gmail
- Anthropic Claude API key
- 10–15 minutes setup time

Step-by-Step Setup
1. Import Workflow via JSON in n8n
2. Configure Bright Data – Add API credentials and dataset ID
3. Google Sheets – Set up credentials and map columns
4. Claude AI – Insert API key and select the desired model
5. Gmail – Authenticate Gmail and update mail node settings
6. Update Variables – Replace placeholders with business info
7. Test & Launch – Submit a sample form and verify all outputs

📖 Usage Guide

Adding Search Keywords
Submit the form with search terms, business description, and industry category to trigger the workflow.
Understanding AI Analysis
Emails are sent only if:
- Collaboration status = Highly Relevant
- Follower count ≥ 5,000
- Industry alignment confirmed
Claude AI returns a 50-word analysis justifying the match. A hedged sketch of this scoring call is shown at the end of this section.

Customizing Filters
Edit the "Find the Collaborator" prompt to adjust:
- Follower thresholds
- Industry relevance
- Additional metrics (e.g., engagement rate)

Viewing Results
The Google Sheets log includes:
- Influencer metadata
- AI scores and rationale
- Collaboration status
- Email delivery timestamp

🔧 Customization Options
- **Add More Fields:** Engagement rate, contact email, content themes
- **Email Personalization:** Customize message templates or integrate other mail services
- **Enhanced Filtering:** Use engagement rates, region, content frequency

🚨 Troubleshooting

| Issue | Fix |
|-------|-----|
| Bright Data fails | Recheck API key and dataset ID |
| No influencer data | Adjust keywords or dataset scope |
| Sheets permission error | Re-authenticate and check sharing |
| Claude fails | Validate API key and prompt |
| Emails not sent | Re-auth Gmail or update recipient field |
| Form not triggering | Reconfirm webhook URL and permissions |

Advanced Debugging
- Check n8n execution logs
- Run individual nodes to pinpoint failures
- Confirm all data formats
- Handle API rate limits
- Add error-catch nodes for retries

📊 Use Cases & Examples
- **Brand Discovery:** Fashion, tech, fitness creators
- **Competitor Insights:** Find influencers used by rival brands
- **Campaign Planning:** Build targeted influencer lists
- **Market Research:** Identify creator trends across regions

⚙️ Advanced Configuration
- **Batch Execution:** Process multiple keywords with delay logic
- **Engagement Metrics:** Scrape and calculate likes-to-follower ratios
- **CRM Integration:** Sync qualified profiles to HubSpot, Salesforce, or Slack

📈 Performance & Limits
- **Processing Time:** 3–5 minutes per keyword
- **Concurrency:** 3–5 simultaneous fetches (depends on plan)
- **Accuracy:** >95% influencer data reliability
- **Success Rate:** 90%+ for outreach and processing
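Below is a minimal sketch of the Claude relevance-scoring call plus the email gate. The model name, prompt, and the expected JSON response shape are assumptions for illustration; the workflow's "Find the Collaborator" prompt defines the real criteria.

```javascript
// Minimal sketch of the Claude relevance scoring and the email-sending gate.
const ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY;

async function scoreInfluencer(profile, businessInfo) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": ANTHROPIC_API_KEY,
      "anthropic-version": "2023-06-01",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-haiku-latest", // illustrative model choice
      max_tokens: 300,
      messages: [
        {
          role: "user",
          content:
            `Business: ${businessInfo}\n` +
            `Influencer bio: ${profile.description} (${profile.followers} followers)\n` +
            `Reply as JSON: {"collaboration": "Highly Relevant" | "Not Relevant", ` +
            `"analysis": "<50-word justification>"}`,
        },
      ],
    }),
  });
  const data = await res.json();
  const verdict = JSON.parse(data.content[0].text);

  // Apply the workflow's gate: only highly relevant creators with 5,000+ followers.
  verdict.sendEmail =
    verdict.collaboration === "Highly Relevant" && profile.followers >= 5000;
  return verdict;
}

scoreInfluencer(
  { description: "Fashion & lifestyle content", followers: 50000 },
  "Sustainable fashion brand for Gen Z",
).then(console.log).catch(console.error);
```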
by Ranjan Dailata
Disclaimer

This template is only available on n8n self-hosted as it makes use of the community node for the MCP Client.

Who this is for?

The Chat Conversations with Bright Data MCP Search Engines & Google Gemini workflow is designed for users who need real-time, AI-enhanced conversations powered by live search engine results.

This workflow is tailored for:

- **Data Analysts** - Who want live, search-based data fused with AI reasoning.
- **Marketing Researchers** - Seeking up-to-the-minute market or competitor insights via conversational AI.
- **Product Managers** - Exploring user needs, market trends, and competitor analysis in real time.
- **AI Developers** - Building dynamic applications that combine live search data with intelligent conversation agents.
- **Growth Hackers** - Who need fast, conversational research tools for campaign ideation, outreach, or content creation.

What problem is this workflow solving?

Traditional chatbots and AI systems often rely on static, outdated data. This workflow enables AI agents to fetch live search engine data and converse intelligently about it, making interactions dynamic, accurate, and highly contextual. It solves the major gaps of:

- Outdated Knowledge: Regular chatbots lack up-to-date information from live web searches.
- Manual Search Fatigue: Manually searching for information and interpreting it is time-consuming.
- Context Bridging: Connecting search results into meaningful, conversational replies requires human-level reasoning.

What this workflow does?

1. Accepts a user's conversational query input.
2. Triggers a search request to Bright Data's MCP Search Engines API (Google, Bing, etc.) based on the query.
3. Waits for the search task to complete.
4. Retrieves real-time search results.
5. Feeds the search results and original question into Google Gemini.
6. Generates a human-like, contextually accurate AI response combining live information and conversational flow.
7. Outputs the response back into a chat app.

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have the Google Gemini API Key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node.

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine. Also complete the "Account Setup" as mentioned in the @brightdata/mcp URL.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data Web Unlocker API token into the Environments textbox above as API_TOKEN=<your-token>.
7. Update the HTTP Request for Webhook Notification node for sending the webhook notification for chat responses (a hedged sketch of that request is shown below).

How to customize this workflow to your needs

- Change Search Engine: Add or remove the Search Engine MCP tools based on the Bright Data MCP Server updates.
- Expand Outputs: Send AI chat responses to Slack, Discord, custom chat UIs, WhatsApp, or CRM systems.
- Store conversation logs in a database (PostgreSQL, MongoDB, etc.) for future audits or training.
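For reference, a minimal sketch of the chat-response webhook notification follows. The endpoint URL and payload shape are placeholders; point them at whatever chat app or downstream system should receive the AI's answers.

```javascript
// Minimal sketch of the chat-response webhook notification.
async function notifyChatResponse(question, answer) {
  const res = await fetch("https://example.com/webhook/chat-responses", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      question,
      answer,
      timestamp: new Date().toISOString(),
    }),
  });
  if (!res.ok) throw new Error(`Webhook notification failed: ${res.status}`);
}

notifyChatResponse(
  "What are the latest n8n release highlights?",
  "Based on live search results, the latest release adds...",
).catch(console.error);
```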