by Mark Shcherbakov
Video Guide

I prepared a comprehensive guide detailing how to create a Smart Agent that automates meeting task management by analyzing transcripts, generating tasks in Airtable, and scheduling follow-ups when necessary. Youtube Link

Who is this for?

This workflow is ideal for project managers, team leaders, and business owners looking to enhance productivity during meetings. It is particularly helpful for those who need to convert discussions into actionable items swiftly and effectively.

What problem does this workflow solve?

Managing action items from meetings often leads to missed tasks and poor follow-up. This automation alleviates that issue by automatically generating tasks from meeting transcripts, keeping everyone informed about their responsibilities and streamlining communication.

What this workflow does

The workflow leverages n8n to create a Smart Agent that listens for completed meeting transcripts, processes them using AI, and generates tasks in Airtable. Key functionalities include:

- Capturing completed meeting events through webhooks.
- Extracting relevant meeting details such as transcripts and participants using API calls.
- Generating structured tasks from meeting discussions and sending notifications to clients.

The main components are:

- Webhook Integration: Listens for meeting completion events to trigger subsequent actions.
- API Requests for Data: Pulls necessary details like transcripts and participant information from Fireflies.
- Task and Notification Generation: Automatically creates tasks in Airtable and notifies clients of their responsibilities.

Setup (N8N Workflow)

1. Configure the Webhook: Set up a webhook to capture meeting completion events and integrate it with Fireflies.
2. Retrieve Meeting Content: Use GraphQL API requests to extract meeting details and transcripts, ensuring appropriate authentication through Bearer tokens (see the sketch below).
3. AI Processing Setup: Define system messages for AI tasks and configure connections to the AI chat model (e.g., OpenAI's GPT) to process transcripts.
4. Task Creation Logic: Create structured tasks based on AI output, ensuring necessary details are captured and records are created in Airtable.
5. Client Notifications: Use an email node to notify clients about their tasks, ensuring communications are client-specific.
6. Scheduling Follow-Up Calls: Set up Google Calendar events if follow-up meetings are required, populating details from the original meeting context.
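As a rough illustration of step 2, the body of the HTTP Request node that calls Fireflies' GraphQL endpoint (https://api.fireflies.ai/graphql, with an `Authorization: Bearer <token>` header) might look like the sketch below. The query fields and the `meetingId` expression are assumptions for illustration; check them against the Fireflies API docs and your actual webhook payload.

```json
{
  "query": "query Transcript($transcriptId: String!) { transcript(id: $transcriptId) { title date participants sentences { text speaker_name } } }",
  "variables": {
    "transcriptId": "{{ $json.body.meetingId }}"
  }
}
```

The returned sentences and participants can then be concatenated into a single transcript string before being passed to the AI chat model.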
by Ranjan Dailata
Disclaimer

This template is only available on n8n self-hosted, as it makes use of the community node for the MCP Client.

Who this is for

The Extract, Transform LinkedIn Data with Bright Data MCP Server & Google Gemini workflow is an automated solution that scrapes LinkedIn content via the Bright Data MCP Server, then transforms the response using a Gemini LLM. The final output is sent via webhook notification and also persisted on disk.

This workflow is tailored for:

- **Data Analysts**: Who require structured LinkedIn datasets for analytics and reporting.
- **Marketing and Sales Teams**: Looking to enrich lead databases, track company updates, and identify market trends.
- **Recruiters and Talent Acquisition Specialists**: Who want to automate candidate sourcing and company research.
- **AI Developers**: Integrating real-time professional data into intelligent applications.
- **Business Intelligence Teams**: Needing current and comprehensive LinkedIn data to drive strategic decisions.

What problem is this workflow solving?

Gathering structured and meaningful information from the web is traditionally slow, manual, and error-prone. This workflow provides:

- Reliable web scraping using Bright Data MCP Server LinkedIn tools.
- LinkedIn person and company scraping with AI Agents set up on the Bright Data MCP Server tools.
- Data extraction and transformation with the Google Gemini LLM.
- Persistence of the LinkedIn person and company info to disk.
- A webhook notification carrying the LinkedIn person and company info.

What this workflow does

This n8n workflow performs the following steps:

1. Trigger: Start manually.
2. Input URL(s): Specify the LinkedIn person and company URLs.
3. Web Scraping (Bright Data): Use Bright Data's MCP Server LinkedIn tools to extract the person and company data.
4. Data Transformation & Aggregation: Use the Google Gemini LLM to handle the data transformation.
5. Store / Output: Save results to disk and send a webhook notification.

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API Key. Visit Google AI Studio
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

Setup

1. Make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel.
6. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
7. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to set the Bright Data API token within the Environments textbox as API_TOKEN=<your-token> (a configuration sketch appears at the end of this template).
8. Update the LinkedIn person and company URLs in the workflow.
9. Update the Webhook HTTP Request node with the Webhook endpoint of your choice.
10. Update the file name and path to persist on disk.

How to customize this workflow to your needs

- Different Inputs: Instead of static URLs, accept URLs dynamically via webhook or form submissions.
- Data Extraction: Modify the LinkedIn Data Extractor node with a suitable prompt to format the data as you wish.
- Outputs: Update the Webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
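For reference, the MCP Client (STDIO) credential from Setup step 7 typically boils down to a command, its arguments, and the environment line. A minimal sketch, assuming the server is launched with npx (the token value is a placeholder):

```json
{
  "command": "npx",
  "args": ["-y", "@brightdata/mcp"],
  "environments": "API_TOKEN=<your-bright-data-token>"
}
```

With this in place, the MCP Client nodes in the workflow can list and call the Bright Data LinkedIn tools exposed by the server.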
by Anurag Srivastava
🧠 AI Prompt Generator Workflow – n8n Documentation

Who is this for?

This workflow is for AI builders, prompt engineers, developers, marketers, and no-code creators who want to convert rough user input into structured, high-quality prompts for LLMs. It’s especially useful for tools that rely on precision prompting and want to automate the discovery of intent and constraints.

What problem is this workflow solving? / Use case

Many users struggle to write effective prompts due to vague ideas or unclear formatting needs. This workflow:

- Collects structured user input.
- Dynamically generates clarifying questions.
- Returns a well-formatted AI prompt based on the user's intent and context.

This ensures the generated prompt is useful for downstream AI agents without requiring technical understanding from the end user.

What this workflow does

1. Start with a branded form UI. The user is shown a styled form with questions like:
   - What do you want to build?
   - What tools can you access?
   - What input can be expected?
   - What output do you expect?
2. Analyze and generate relevant follow-up questions. The workflow sends the user's answers to Google Gemini (via LangChain), which outputs 1–3 clarifying questions. These questions are parsed into a dynamic form (a sketch of the expected output shape follows this template).
3. Loop through and collect follow-up answers. Each follow-up question is shown in a form one at a time to capture additional context.
4. Merge all inputs. The base intent and follow-up responses are merged into a single context block.
5. Generate a final AI-ready prompt. The prompt generator node formats everything into a clean, six-section structure: <constraints>, <role>, <inputs>, <tools>, <instructions>, <conclusions>.
6. Display the final result. The finished prompt is shown in a clean UI where users can easily copy and reuse it.

Setup

- Credentials Required: Google Gemini (PaLM) API credentials (already integrated as Google Gemini(PaLM) Api account 2).
- Form Trigger: Ensure the On form submission trigger is exposed via a webhook or public endpoint (e.g. using ngrok or a deployed server).
- Styling: Custom CSS is included in all form nodes for a beautiful UI. You can modify this to match your branding.
- Environment: This workflow is compatible with self-hosted n8n or n8n.cloud. Webhooks must be accessible to users who will fill out the form.

How to customize this workflow to your needs

- **Change the base questions**: Update the BaseQuestions form node to add or remove fields depending on your use case.
- **Modify Gemini prompts**: You can edit the system prompt inside PromptGenerator to change tone, output structure, or AI instructions.
- **Change prompt formatting**: If you use a different AI agent (like GPT, Claude, or Mistral), adjust the section labels and formatting to suit that agent’s expected input.
- **Send results elsewhere**: Add integration nodes after PromptGenerator, such as Google Docs / Notion (to log prompts), Gmail / Slack (to notify your team), or Zapier / Make (to push to other automation flows).
- **Skip follow-up questions (optional)**: If your base form collects all needed info, you can bypass the RelevantQuestions form section by modifying conditional logic.

Example Output Prompt (Structure)

<role>
You are an AI assistant that converts videos into LinkedIn posts with a witty tone.
</role>
<inputs>
- A short video (max 5 minutes)
- Desired tone: witty
- Style: both summary and quotes
- Audience: general network
</inputs>
<tools>
You do not have access to APIs or web search.
</tools>
<instructions>
1. Parse transcript.
2. Extract insights and quotes.
3. Write an engaging, witty LinkedIn post under 3000 characters.
</instructions>
<constraints>
Avoid technical jargon. No generic intros. Make it platform-native.
</constraints>
<conclusions>
Return a LinkedIn-ready post that starts with a hook and ends with hashtags.
</conclusions>
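As a point of reference for step 2, the clarifying-questions call works best when Gemini is asked to return a structured payload that the dynamic form can iterate over. The shape below is purely illustrative; the `questions` field name is an assumption and should match whatever schema your output parser defines.

```json
{
  "questions": [
    "Should the post include a call to action?",
    "Do you want emojis in the output?",
    "Which language should the post be written in?"
  ]
}
```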
by David Ashby
Complete MCP server exposing all ProfitWell Tool operations to AI agents. Zero configuration needed - all 2 operations pre-built.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every ProfitWell Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the sketch below)
• Native Integration: Uses the official n8n ProfitWell node with full error handling

📋 Available Operations (2 total)

Every possible ProfitWell Tool operation is included:

🔧 Company (1 operation)
• Get settings for your company

🔧 Metric (1 operation)
• Get a metric

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native ProfitWell API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every ProfitWell Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
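To make the $fromAI() mechanism concrete: inside a tool node, a parameter can be filled by an n8n expression that declares a key, a description for the AI, and a type. The parameter names below are illustrative rather than copied from this workflow's nodes.

```json
{
  "metric": "={{ $fromAI('metric', 'Which monthly metric to retrieve, e.g. recurring revenue', 'string') }}",
  "month": "={{ $fromAI('month', 'Month to query, in YYYY-MM format', 'string') }}"
}
```

At call time, the connected AI agent supplies values for `metric` and `month` based on the user's request, and n8n substitutes them into the node before executing it.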
by Jez
Summary

This n8n workflow implements an AI-powered "Local Event Finder" agent. It takes user criteria (like event type, city, date, and interests), uses a suite of search tools (Brave Web Search, Brave Local Search, Google Gemini Search) and a web scraper (Jina AI) to find relevant events, and returns formatted details. The entire agent is exposed as a single, easy-to-use Model Context Protocol (MCP) tool, making it simple to integrate into other workflows or applications.

This template cleverly combines the MCP server endpoint and the AI agent logic into a single n8n workflow file for ease of import and management.

Key Features

- **Intelligent Multi-Tool Search**: Dynamically utilizes web search, precise local search, and advanced Gemini semantic search to find events.
- **Detailed Information via Web Scraping**: Employs Jina AI to extract comprehensive details directly from event web pages.
- **Simplified MCP Tool Exposure**: Makes the complex event-finding logic available as a single, callable tool for other MCP-compatible clients (e.g., Roo Code, Cline, other n8n workflows).
- **Customizable AI Behavior**: The core AI agent's behavior, tool usage strategy, and output formatting can be tailored by modifying its System Prompt.
- **Modular Design**: Uses distinct nodes for the LLM, memory, and each external tool, allowing for easier modification or extension.

Benefits

- **Simplifies Client-Side Integration**: Offloads the complexity of event searching and data extraction from client applications.
- **Provides Richer Event Data**: Goes beyond simple search links to extract and format key event details.
- **Flexible & Adaptable**: Can be adjusted to various event search needs and can incorporate new tools or data sources.
- **Efficient Processing**: Leverages specialized tools for different aspects of the search process.

Nodes Used

- MCP Trigger
- Tool Workflow
- Execute Workflow Trigger
- AI Agent
- Google Gemini Chat Model (ChatGoogleGenerativeAI)
- Simple Memory (Window Buffer Memory)
- MCP Client (for Brave Search tools via Smithery)
- Google Gemini Search Tool
- Jina AI Tool

Prerequisites

- An active n8n instance.
- **Google AI API Key**: For the Gemini LLM (Google Gemini Chat Model node) and the Google Gemini Search Tool. Ensure your key is enabled for these services.
- **Jina AI API Key**: For the jina_ai_web_page_scraper node. A free tier is often available.
- **Access to a Brave Search MCP Provider (Optional but Recommended)**: This template uses MCP Client nodes configured for Brave Search via a provider like Smithery. You'll need an account/API key for your chosen Brave Search MCP provider to configure the smithery brave search credential. Alternatively, you could adapt these to call the Brave Search API directly if you manage your own access, or replace them with other search tools.

Setup Instructions

1. Import Workflow: Download the JSON file for this template and import it into your n8n instance.
2. Configure Credentials:
   - Google Gemini LLM: Locate the Google Gemini Chat Model node. Select or create a "Google Gemini API" credential (named Google Gemini Context7 in the template) using your Google AI API Key.
   - Google Gemini Search Tool: Locate the google_gemini_event_search node. Select or create a "Gemini API" credential (named Gemini Credentials account in the template) using your Google AI API Key (ensure it's enabled for Search/Vertex AI).
   - Jina AI Web Scraper: Locate the jina_ai_web_page_scraper node. Select or create a "Jina AI API" credential (named Jina AI account in the template) using your Jina AI API Key.
   - Brave Search (via MCP): You'll need an MCP Client HTTP API credential to connect to your Brave Search MCP provider (e.g., Smithery). Create a new "MCP Client HTTP API" credential in n8n. Name it, for example, smithery brave search. Configure it with the Base URL and any required authentication (e.g., API key in headers) for your Brave Search MCP provider. Locate the brave_web_search and brave_local_search MCP Client nodes in the workflow and assign the smithery brave search (or your named credential) to both of these nodes.
3. Activate Workflow: Ensure the workflow is active.
4. Note the MCP Trigger Path: Locate the local_event_finder (MCP Trigger) node. The Path field (e.g., 0ca88864-ec0a-4c27-a7ec-e28c5a900697) combined with your n8n webhook base URL forms the endpoint for client calls (a client configuration sketch appears at the end of this template). Example Endpoint: YOUR_N8N_INSTANCE_URL/webhooks/PATH-TO-MCP-SERVER

Customization

- **AI Behavior**: Modify the "System Message" parameter within the event_finder_agent node to change the AI's persona, its strategy for using tools, or the desired output format.
- **LLM Model**: Swap the Google Gemini Chat Model node with another compatible LLM node (e.g., OpenAI Chat Model) if desired. You'll need to adjust credentials and potentially the system prompt.
- **Tools**: Add, remove, or replace tool nodes (e.g., use a different search provider, add a weather API tool) and update the event_finder_agent's system prompt and tool configuration accordingly.
- **Scraping Depth**: Be mindful of the jina_ai_web_page_scraper's usage due to potential timeouts. The system prompt already guides the LLM on this, but you can adjust its usage instructions.
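One way an MCP client could reach the endpoint noted in step 4 is via a stdio-to-remote bridge such as mcp-remote, for clients that only speak stdio. The configuration below is an assumption about your client setup (here, a Claude Desktop-style config file), not part of the template itself; substitute your actual n8n endpoint URL.

```json
{
  "mcpServers": {
    "local_event_finder": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://YOUR_N8N_INSTANCE_URL/webhooks/PATH-TO-MCP-SERVER"]
    }
  }
}
```

Clients that support remote MCP servers natively can instead point directly at the URL without the bridge.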
by Mark Shcherbakov
Video Guide

I prepared a comprehensive guide detailing how to automate the parsing of invoices using n8n and LlamaParse, seamlessly capturing and storing vital billing information. Youtube Link

Who is this for?

This workflow is ideal for finance teams, accountants, and business operations managers who need to streamline invoice processing. It is particularly helpful for organizations seeking to reduce manual entry errors and improve efficiency in managing billing information.

What problem does this workflow solve?

Manually processing invoices can be time-consuming and error-prone. This automation eliminates the need for manual data entry by capturing invoice details directly from uploaded documents and storing structured data efficiently. This enhances productivity and accuracy across financial operations.

What this workflow does

The workflow leverages n8n and LlamaParse to automatically detect new invoices in a designated Google Drive folder, parse essential billing details, and store the extracted data in a structured format. The key functionalities include:

- Real-time detection of new invoices via Google Drive triggers.
- Automated HTTP requests to initiate parsing through LlamaCloud.
- Structured storage of invoice details and line items in a database for future reference.

The main components are:

- Google Drive Integration: Monitors a specific folder in Google Drive for new invoice uploads.
- Parsing with LlamaParse: Automatically sends invoices for parsing and processes results through webhooks.
- Data Storage in Airtable: Creates records for invoices and their associated line items, allowing for detailed tracking.

Setup (N8N Workflow)

1. Google Drive Trigger: Set up a trigger to detect new files in a specified folder dedicated to invoices.
2. File Upload to LlamaParse: Create an HTTP request that sends the invoice file to LlamaParse for parsing, including relevant header settings and the webhook URL (see the sketch below).
3. Webhook Processing: Establish a webhook node to handle parsed results from LlamaParse, extracting the needed invoice details effectively.
4. Invoice Record Creation: Create initial records for invoices in your database using the parsed details received from the webhook.
5. Line Item Processing: Transform string data into structured line item arrays and create individual records for each item linked to the main invoice.
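A rough sketch of how the step-2 HTTP Request node might be configured, assuming LlamaCloud's parsing upload endpoint and a `webhook_url` form field; the exact parameter names and the callback path are assumptions, so verify them against the LlamaParse API docs before use:

```json
{
  "method": "POST",
  "url": "https://api.cloud.llamaindex.ai/api/parsing/upload",
  "headers": { "Authorization": "Bearer <LLAMA_CLOUD_API_KEY>" },
  "contentType": "multipart-form-data",
  "bodyParameters": {
    "file": "<binary invoice file from the Google Drive download node>",
    "webhook_url": "https://<your-n8n-host>/webhook/llamaparse-result"
  }
}
```

LlamaParse then calls the `webhook_url` with the parsed result once processing finishes, which is what the step-3 webhook node listens for.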
by Tiartyos
Voice Cloning Workflow - Zyphra Zonos API

Who is this for?

This workflow is designed for developers, content creators, and businesses looking to automate high-quality voice synthesis using AI voice cloning technology.

What problem does this solve?

It automates the process of generating natural-sounding speech from text using a sample voice file, eliminating the need for manual voice recording and providing consistent voice output for applications like audiobooks, virtual assistants, or content localization.

What this workflow does

The workflow receives text and voice cloning parameters via webhook, reads a sample voice file from your storage, sends the data to Zyphra's Zonos API for voice synthesis, and saves the generated audio file to your specified output location.

Prerequisites

You'll need:

- API key from Zyphra (obtain from https://playground.zyphra.com/settings/api-keys)
- Account registration at https://playground.zyphra.com
- Sample voice file stored on accessible disk/cloud storage
- n8n instance running with webhook capabilities

Setup

1. Configure your Zyphra API key in the "Call Zyphra Clone API" node under Header Parameters (Name: X-API-Key, Value: your-api-key)
2. Ensure your sample voice files are accessible at the paths you'll specify
3. Test that the webhook endpoint is accessible

Supported Audio Formats

The API supports multiple output formats through the mime_type parameter:

- **WebM** (default) - audio/webm
- **Ogg** - audio/ogg
- **WAV** - audio/wav
- **MP3** - audio/mp3 or audio/mpeg
- **MP4/AAC** - audio/mp4 or audio/aac

Usage Example

Endpoint: POST http://localhost:5678/webhook-test/voice-clone

Headers: Content-Type: application/json

Request Body:

```json
{
  "text": "Hello there! This voice sounds just like the sample!",
  "speaking_rate": 18,
  "sample_voice_path": "/data/output/sampleVoice.wav",
  "output_path": "/data/output/",
  "language_iso_code": "en-us",
  "mime_type": "audio/wav",
  "model": "zonos-v0.1-transformer",
  "emotion": {
    "happiness": 0.8,
    "neutral": 0.3,
    "sadness": 0.05,
    "disgust": 0.05,
    "fear": 0.05,
    "surprise": 0.05,
    "anger": 0.05,
    "other": 0.5
  }
}
```

Parameters

Required Parameters

- **text**: Text to synthesize into speech
- **sample_voice_path**: Path to your voice sample file
- **output_path**: Directory where the generated audio will be saved

Optional Parameters (with defaults)

- **speaking_rate**: 15 - Speech speed
- **language_iso_code**: "en-us" - Language code
- **mime_type**: "audio/wav" - Output audio format
- **model**: "zonos-v0.1-transformer" - AI model to use
- **emotion**: Object with emotion levels (0-1 scale)
by Sergey Skorobogatov
Accept YooKassa payments and log transactions in Google Sheets

🧾 Summary

This workflow allows you to accept online payments via YooKassa and log both orders and transactions in Google Sheets — all without writing a single line of code. It supports the full payment flow: product selection, payment initiation, webhook processing, refund updates, and payment status checks.

👥 Who is this for?

This template is ideal for:

- Online stores with simple checkout flows
- Sellers of digital products or info-courses
- Entrepreneurs using Telegram bots or web forms
- Anyone needing quick payment integration with Google Sheets tracking

🎯 What problem does this workflow solve?

Setting up online payments usually requires backend infrastructure. This no-code solution automates the entire payment flow:

- Handles product listing and price retrieval
- Initiates payments with email and return URL
- Listens for payment.succeeded and refund.succeeded events
- Records every action into structured Google Sheets

⚙️ What this workflow does

1. GET /products: Returns a sorted list of products from a Google Sheet (products).
2. POST /payment:
   - Validates required fields (product_id, email, return_url)
   - Checks email format
   - Fetches product data from products
   - Generates a unique idempotence key
   - Sends a request to the YooKassa API (see the sketch after this section)
   - Saves the order into the orders sheet
   - Returns a payment confirmation link
3. POST /yoomoney: Webhook to process payment/refund events:
   - On payment.succeeded, adds an entry to transactions
   - On refund.succeeded, updates the transaction status
4. GET /status/:id: Returns real-time payment status from YooKassa

🚀 Setup

1. Connect credentials:
   - Google Sheets (OAuth2)
   - YooKassa (Basic Auth using shopId and secretKey)
2. Update the following Google Sheets:
   - products: should contain product_id, title, price
   - orders: for saving confirmed purchases
   - transactions: for logging all successful or refunded payments
3. Test the endpoints using any HTTP client. Example payload for /payment:

```json
{
  "product_id": "abc123",
  "email": "user@example.com",
  "return_url": "https://your.site/success"
}
```

🔧 How to customize this workflow

- Add delivery logic (e.g., email with a product link after successful payment)
- Replace Google Sheets with a database (e.g., PostgreSQL)
- Connect Telegram or other messengers for post-payment notifications
- Add promo codes, discounts, or subscription logic

💼 Use cases

- Simple online checkouts
- Telegram bots selling access
- Educational product sales
- MVP e-commerce flows
- Donation or membership payments

📎 Notes

- ✅ Includes Sticky Notes for sections
- ✅ Includes error handling and validation
- ✅ No custom code needed except UUID generation
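For orientation, the request sent to YooKassa in step 2 follows YooKassa's create-payment API: POST https://api.yookassa.ru/v3/payments with Basic Auth (shopId:secretKey) and the generated idempotence key in an Idempotence-Key header. The body below is a hedged sketch; the amount and description values are placeholders, and the actual workflow fills them from the products sheet and the incoming request:

```json
{
  "amount": { "value": "499.00", "currency": "RUB" },
  "capture": true,
  "confirmation": {
    "type": "redirect",
    "return_url": "https://your.site/success"
  },
  "description": "Order abc123 for user@example.com"
}
```

YooKassa's response includes a confirmation.confirmation_url, which is the payment link the workflow returns to the caller.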
by Ranjan Dailata
Who is this for?

This workflow is tailored for:

- Data Analysts and Researchers: Who need to gather and summarize information from Bing search results efficiently.
- Developers and Engineers: Looking to integrate Bing search data into applications or services.
- Digital Marketers and SEO Specialists: Interested in monitoring search engine results for specific keywords or topics.

What problem is this workflow solving?

Manually extracting and summarizing information from search engine results can be time-consuming and error-prone. This workflow automates the process of querying Bing's Copilot Search, extracting structured data from the results, summarizing the information, and sending a notification via webhook. It leverages Microsoft Copilot to retrieve search results and integrates AI-powered tools for data extraction and summarization.

What this workflow does

1. Performs Bing searches using Bright Data's Bing Search API.
2. Extracts structured data from the search results.
3. Summarizes the extracted information using AI tools.
4. Sends the summarized data to a specified endpoint via webhook.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token (see the request sketch below).
4. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Perform a Bing Copilot Request node with the prompt you wish to use for the search.
6. Update the Structured Data Webhook Notifier node with the Webhook endpoint of your choice.
7. Update the Summary Webhook Notifier node with the Webhook endpoint of your choice.

How to customize this workflow to your needs

- Modify Search Queries: Adjust the search terms to target different topics or keywords.
- Change Data Extraction Logic: Customize the extraction process to capture specific data points from the search results.
- Alter Summarization Techniques: Integrate different AI models or adjust parameters to change how summaries are generated.
- Update Webhook Endpoints: Direct the summarized data to different endpoints as required.
- Schedule Workflow Runs: Set up automated triggers to run the workflow at desired intervals.
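As a rough sketch of what the Bing request node sends, Bright Data's Web Unlocker exposes a generic request endpoint (POST https://api.brightdata.com/request, authorized with the Bearer token from step 3). The zone name and query below are placeholders, not values from this template:

```json
{
  "zone": "web_unlocker1",
  "url": "https://www.bing.com/search?q=top+ai+automation+tools",
  "format": "raw"
}
```

The raw HTML returned by this call is what the downstream extraction and Gemini summarization nodes operate on.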
by Davide
The "Voice RAG Chatbot with ElevenLabs and OpenAI" workflow in n8n is designed to create an interactive voice-based chatbot system that leverages both text and voice inputs for providing information. Ideal for shops, commercial activities and restaurants How it works: Here's how it operates: Webhook Activation: The process begins when a user interacts with the voice agent set up on ElevenLabs, triggering a webhook in n8n. This webhook sends a question from the user to the AI Agent node. AI Agent Processing: Upon receiving the query, the AI Agent node processes the input using predefined prompts and tools. It extracts relevant information from the knowledge base stored within the Qdrant vector database. Knowledge Base Retrieval: The Vector Store Tool node interfaces with the Qdrant Vector Store to retrieve pertinent documents or data segments matching the user’s query. Text Generation: Using the retrieved information, the OpenAI Chat Model generates a coherent response tailored to the user’s question. Response Delivery: The generated response is sent back through another webhook to ElevenLabs, where it is converted into speech and delivered audibly to the user. Continuous Interaction: For ongoing conversations, the Window Buffer Memory ensures context retention by maintaining a history of interactions, enhancing the conversational flow. Set up steps: To configure this workflow effectively, follow these detailed setup instructions: ElevenLabs Agent Creation: Begin by creating an agent on ElevenLabs (e.g., named 'test_n8n'). Customize the first message and define the system prompt specific to your use case, such as portraying a character like a waiter at "Pizzeria da Michele". Add a Webhook tool labeled 'test_chatbot_elevenlabs' configured to receive questions via POST requests. Qdrant Collection Initialization: Utilize the HTTP Request nodes ('Create collection' and 'Refresh collection') to initialize and clear existing collections in Qdrant. Ensure you update placeholders QDRANTURL and COLLECTION accordingly. Document Vectorization: Use Google Drive integration to fetch documents from a designated folder. These documents are then downloaded and processed for embedding. Employ the Embeddings OpenAI node to generate embeddings for the downloaded files before storing them into Qdrant via the Qdrant Vector Store node. AI Agent Configuration: Define the system prompt for the AI Agent node which guides its behavior and responses based on the nature of queries expected (e.g., product details, troubleshooting tips). Link necessary models and tools including OpenAI language models and memory buffers to enhance interaction quality. Testing Workflow: Execute test runs of the entire workflow by clicking 'Test workflow' in n8n alongside initiating tests on the ElevenLabs side to confirm all components interact seamlessly. Monitor logs and outputs closely during testing phases to ensure accurate data flow between systems. Integration with Website: Finally, integrate the chatbot widget onto your business website replacing placeholder AGENT_ID with the actual identifier created earlier on ElevenLabs. By adhering to these comprehensive guidelines, users can successfully deploy a sophisticated voice-driven chatbot capable of delivering precise answers utilizing advanced retrieval-augmented generation techniques powered by OpenAI and ElevenLabs technologies.
by Incrementors
Google Maps Business Phone No Scraper with Bright Data & Sheets

Overview

This n8n workflow automates the process of scraping business phone numbers and information from Google Maps using the Bright Data API and saves the results to Google Sheets.

Workflow Components

1. Form Trigger - Submit Location and Keywords
   - Type: Form Trigger
   - Purpose: Start the workflow when a form is submitted
   - Fields: Location (required), Keywords (required)
   - Configuration: Form Title: "GMB", Webhook ID: 8b72dcdf-25a1-4b63-bb44-f918f7095d5d

2. Bright Data API - Request Business Data
   - Type: HTTP Request
   - Purpose: Sends the scraping request to the Bright Data API
   - Method: POST
   - URL: https://api.brightdata.com/datasets/v3/trigger
   - Query Parameters: dataset_id: gd_m8ebnr0q2qlklc02fz, include_errors: true, type: discover_new, discover_by: location, limit_per_input: 2
   - Headers: Authorization: Bearer BRIGHT_DATA_API_KEY
   - Request Body:

```json
{
  "input": [
    {
      "country": "{{ $json.Location }}",
      "keyword": "{{ $json.keywords }}",
      "lat": ""
    }
  ],
  "custom_output_fields": [
    "url", "country", "name", "address", "description", "open_hours",
    "reviews_count", "rating", "reviews", "services_provided",
    "open_website", "phone_number", "permanently_closed",
    "photos_and_videos", "people_also_search"
  ]
}
```

3. Check Scraping Status
   - Type: HTTP Request
   - Purpose: Check if data scraping is completed
   - Method: GET
   - URL: https://api.brightdata.com/datasets/v3/progress/{{ $json.snapshot_id }}
   - Query Parameters: format: json
   - Headers: Authorization: Bearer BRIGHT_DATA_API_KEY

4. Check If Status Ready
   - Type: Conditional (IF)
   - Purpose: Determine if scraping is ready or needs to wait
   - Condition: {{ $json.status }} equals "ready"

5. Wait Before Retry
   - Type: Wait
   - Purpose: Pause 1 minute before checking the status again
   - Duration: 1 minute
   - Webhook ID: 7047efad-de41-4608-b95c-d3e0203ef620

6. Check Records Exist
   - Type: Conditional (IF)
   - Purpose: Proceed only if business records are found
   - Condition: {{ $json.records }} not equals 0

7. Fetch Business Data
   - Type: HTTP Request
   - Purpose: Get business information including phone numbers
   - Method: GET
   - URL: https://api.brightdata.com/datasets/v3/snapshot/{{ $json.snapshot_id }}
   - Query Parameters: format: json
   - Headers: Authorization: Bearer BRIGHT_DATA_API_KEY

8. Save to Google Sheets
   - Type: Google Sheets
   - Purpose: Store business data in Google Sheets
   - Operation: Append
   - Document ID: YOUR_GOOGLE_SHEET_ID
   - Sheet Name: GMB
   - Column Mapping: Name: {{ $json.name }}, Address: {{ $json.address }}, Rating: {{ $json.rating }}, Phone Number: {{ $json.phone_number }}, URL: {{ $json.url }}

Workflow Flow

1. Start: User submits the form with location and keywords
2. Request: Send the scraping request to the Bright Data API
3. Monitor: Check the scraping status periodically (a sketch of the progress response follows below)
4. Wait Loop: If not ready, wait 1 minute and check again
5. Validate: Ensure records exist before proceeding
6. Fetch: Retrieve the scraped business data
7. Save: Store the results in Google Sheets

Setup Requirements

API Keys & Credentials

- **Bright Data API Key**: Replace BRIGHT_DATA_API_KEY with your actual API key
- **Google Sheets OAuth2**: Configure with your Google Sheets credential ID
- **Google Sheet ID**: Replace YOUR_GOOGLE_SHEET_ID with your actual sheet ID

Google Sheets Setup

1. Create a Google Sheet with a tab named "GMB"
2. Ensure the following columns exist: Name, Address, Rating, Phone Number, URL

Workflow Status

- **Active**: No (currently inactive)
- **Execution Order**: v1
- **Version ID**: 0bed9bf1-00a3-4eb6-bf7c-cf07bee006a2
- **Workflow ID**: Hm7iTSgpu2of6gz4

Notes

- The workflow includes a retry mechanism with 1-minute waits
- Data validation ensures only successful scrapes are processed
- All business information is automatically saved to Google Sheets
- The form trigger allows easy initiation of scraping jobs

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
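To make the status check in steps 3-5 of the flow concrete, the progress endpoint returns a payload along the lines below. This is a hedged sketch inferred from the fields the workflow's IF nodes test ($json.status and $json.records); the snapshot_id value is a placeholder.

```json
{
  "snapshot_id": "s_abc123",
  "status": "ready",
  "records": 2
}
```

Until `status` reaches "ready", the IF node routes execution back through the 1-minute Wait node; once ready, the `records` check gates the final fetch.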
by David Olusola
📊 Google Sheets MCP Workflow – AI Meets Spreadsheets! 😄

✨ What It Does

This n8n workflow lets you chat with your spreadsheets using AI + MCP! From reading and updating data to creating sheets, it’s your smart assistant for Google Sheets 📈🤖

🚀 Cool Features

- 💬 Natural language commands (e.g. "Add a new lead: John Doe")
- ✏️ Full CRUD (Create, Read, Update, Delete)
- 🧠 AI-powered analysis & smart workflows
- 🗂️ Multi-sheet support
- 🔗 Works with ChatGPT, Claude, and more (via MCP)

💡 Use Cases

- Data Tasks: “Update status to 'Done' in row 3”
- Sheet Ops: “Create a ‘Marketing 2024’ sheet”
- Business Flows: “Summarize top sales from Q2”

🛠️ Quick Setup

1. Import Workflow into n8n
   - Copy the JSON
   - In n8n → Import JSON → Paste & Save ✅
2. Connect Google Sheets
   - Create a project in Google Cloud
   - Enable the Sheets & Drive APIs
   - Create OAuth2 credentials (a sketch of the typical scopes appears at the end of this template)
   - In n8n → Add Google Sheets OAuth2 credential → Connect 🔐
3. Add Your Credentials
   - Get your credential ID
   - Open each Google Sheets node → Update with your new credential ID
4. Link to AI (Optional 😊)
   - MCP webhook is pre-set
   - Plug it into your AI tool (like ChatGPT)
   - Send a test command → Watch the magic happen ✨

✅ Test It Out

Try these fun commands:

- 🆕 "Add entry: Jane Doe, janed@example.com"
- 🔍 "Read data from Sales 2024"
- 🧹 "Clear data from A1:C5"
- ➕ "Create sheet 'Budget 2025'"
- ❌ "Delete sheet 'Test'"

🧠 MCP Command List (AI-Callable Functions)

These are the tasks the AI can perform via MCP:

- Add a new entry to a sheet
- Read data from a sheet
- Update a row in a sheet
- Delete a row from a sheet
- Create a new sheet
- Delete an existing sheet
- Clear data from a specific range
- Summarize data from a sheet using AI

⚙️ Tips & Fixes

- OAuth2 Errors? Re-authenticate and check scopes. Confirm the redirect URI is exact.
- Permissions? The spreadsheet must be shared with edit access. Use service accounts for production.
- Webhook Not Firing? Double-check the URL. Trigger it manually to test.
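For the OAuth2 setup in step 2 (and the scope check under Tips & Fixes), the Google Sheets credential generally needs scopes along these lines. Treat this list as a hedged sketch and confirm it against n8n's Google Sheets credential documentation for your version:

```json
{
  "scopes": [
    "https://www.googleapis.com/auth/spreadsheets",
    "https://www.googleapis.com/auth/drive.file"
  ]
}
```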