by Danger
**How it works**

This meta-workflow scans all your active workflows in n8n, identifies those that contain Webhook nodes, and automatically generates a Swagger (OpenAPI) specification from them. The resulting Swagger document reflects all accessible endpoints exposed by your Webhook nodes, making it easier to:

- Visualize your API structure
- Share your endpoints
- Integrate with tools like Postman or Swagger UI

**Enhanced parameter support**

If you want the Swagger document to reflect request parameters (e.g., query or body fields), you can annotate your Webhook nodes using the Note section. When configured properly, these annotations enrich your Swagger documentation with parameter names, types, and descriptions.

**Setup steps**

1. Add WebhookDocs to n8n: import the WebhookDocs JSON file into your n8n instance.
2. Activate the WebhookDocs workflow (you can also use the test endpoint).
3. Annotate Webhook nodes (optional but recommended): to enable parameter documentation, open the Note section of each Webhook node and add annotations in the following format (a worked example follows this section):

   `//@body field_name string description`
   `//@query field_name string description`

4. Open the page https://n8n.youristance.com/webhook/swagger
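As a worked example (the field names and the exact OpenAPI mapping shown here are hypothetical sketches, not guaranteed by the template), a Webhook node whose Note contains `//@query user_id string ID of the user to look up` and `//@body email string Email address to register` might be reflected in the generated specification roughly like this:

```json
{
  "parameters": [
    {
      "name": "user_id",
      "in": "query",
      "schema": { "type": "string" },
      "description": "ID of the user to look up"
    }
  ],
  "requestBody": {
    "content": {
      "application/json": {
        "schema": {
          "type": "object",
          "properties": {
            "email": { "type": "string", "description": "Email address to register" }
          }
        }
      }
    }
  }
}
```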
by Vijayeta Sinel
Automate document translation and ensure translation accuracy using Straker Verify, Google Drive and Slack.

**How it works**

A workflow step is set up to "watch" a Google Drive folder. When your team members place new files in this folder, they are downloaded. Straker Verify then translates them and provides a quality score. Once Straker Verify has completed this, the job info is fetched, the translation is saved to an output folder and you are notified via Slack.

**What problem does this solve?**

When using AI to translate documents, you have no idea about the quality and accuracy of the output. This template answers the question "How good is my translation?" so you have a high level of confidence before you publish.

**Who is this for?**

This workflow template is designed for businesses needing translation and localization of documents such as text docs, presentations, web pages, transcripts, video subtitles and others. Use it to build workflows that localize your content at scale while maintaining translation quality, accuracy and compliance.

**Set up instructions**

Connect to Straker Verify and obtain your API key:

1. Sign up / log in: visit https://verify.straker.ai/ to create an account or log in.
2. Navigate to API keys: go to `Verify → Settings → API Keys`.
3. Copy your key: find and copy your API key.
4. Add the key to n8n: in n8n, go to Settings → Credentials → Straker Verify.

Set up credentials for the Straker Verify node:

1. Open n8n and, from the left sidebar, click on "Credentials".
2. Search for "Straker Verify" and select "Straker Verify API" from the dropdown.
3. Paste the copied access token from the Verify app and save it.

Assign credentials to the node:

1. Go to your workflow and open the Straker Verify node.
2. Select the "Straker Verify API" credentials you just created from the dropdown (Credentials to connect with), and save.

✅ You are now ready to use the workflow.

**Workflow steps**

Step 1: Initiate workflow – upload files to Google Drive. Upload one or more files to the designated Google Drive folder. This triggers the "New File in Google Drive" node in n8n.

Step 2: Verify token balance. The workflow checks your token balance via the "Get Current User Balance" and "User Has Enough Tokens" nodes. Not enough tokens? You will receive a Slack message:
> "Not enough tokens – please top up and re-upload your files."

Step 3: Select workflow. The system fetches available workflows via "Fetch All Users Workflows". No matching workflow found? You'll be notified via Slack:
> "No workflow found"

Step 4: Create project in Straker Verify. The following steps are handled automatically: fetch language/project options, download files from Google Drive, and create a new project using "Create Straker Project". ✅ You'll receive a Slack confirmation:
> "Project created – ID: xxxxxx"

Step 5: Process completion & file return. Once files are processed by Straker, the "Incoming Translation Result" webhook is triggered. The workflow downloads the processed files ("Get File Content from Straker") and uploads them to Google Drive ("Upload File to Google Drive").

Step 6: Confirmation – workflow complete. You'll receive a final Slack message:
> "Workflow complete – Files are now available in Google Drive"
by Ranjan Dailata
**Who is this for?**

This workflow is designed for HR professionals, employer branding teams, talent acquisition strategists, market researchers, and business intelligence analysts who want to monitor, understand, and act upon employee sentiment and company perception on Glassdoor. It's ideal for organizations that value real-time feedback, track employer brand perception, or need summarized insights for leadership reporting without sifting through thousands of raw reviews.

**What problem is this workflow solving?**

Manually reviewing and analyzing Glassdoor reviews is tedious, subjective, and not scalable, especially for larger companies or those with many subsidiaries. This workflow:

- Automates review collection by making a Glassdoor company request via the Bright Data Web Scraper API.
- Uses Google Gemini to summarize the content.
- Sends an actionable summary to HR dashboards, leadership teams, or alert systems via a webhook notification.

**What this workflow does**

- Makes an HTTP request to Glassdoor via the Bright Data Web Scraper API.
- Polls Bright Data for completion of the Glassdoor request.
- Downloads the Glassdoor response when a new snapshot is ready.
- Sends the prompt to Google Gemini for summarization.
- Delivers the summarized insights (strengths, weaknesses, sentiment, patterns) to a configured webhook or dashboard endpoint (an illustrative payload appears at the end of this description).

**Setup**

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. Get a Google Gemini API key (or access through Vertex AI or proxy).
5. Set up a webhook or endpoint to receive the summary (e.g., Slack, Notion, or a custom HR dashboard).

**How to customize this workflow to your needs**

- Change the summary focus by updating the summarization methods and prompts in the "Summarization of Glassdoor Response" node to extract specific insights: cultural feedback, leadership issues, compensation comments, exit motivation.
- Update the "HTTP Request to Glassdoor" node with the specific Glassdoor company you are interested in.
- Format the output as Markdown or HTML for rich delivery of a customized summary.
- Integrate with HR systems (BambooHR, Workday, SAP SuccessFactors via API), Google Sheets, or Airtable.
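The shape of the delivered summary depends on your Gemini prompt; as an illustrative sketch (all field names and values are examples, not fixed by the template), the webhook payload might look like:

```json
{
  "company": "Acme Corp",
  "overall_sentiment": "mixed",
  "strengths": ["Supportive teammates", "Good work-life balance"],
  "weaknesses": ["Slow promotion cycles", "Limited remote options"],
  "patterns": ["Ratings dip in engineering reviews after the 2023 reorg"]
}
```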
by Jason Krol
This is a simple webpage scraper that grabs today's newest 4K Blu-ray preorders as listed on the Blu-ray.com website. It is a scheduled workflow that can run every day and will post a formatted summary message of links to a Discord channel of your choice.

Minimal setup required:

- Just create a webhook for the channel you want posted to in Discord and provide that in the final step.
- The timezone format step is set to East Coast (NYC) by default; feel free to change it.
- No API keys or any special configuration needed (beyond your Discord webhook).
- Feel free to customize the formatting of the message that gets posted 👍

How it works:

- First, format today's date to match the formatting used on the website
- Grab the HTML for the preorders page at www.blu-ray.com
- Filter only the hyperlinks for each Blu-ray on the page
- Then further filter only those with an HTML header matching today's date
- Format how you want the message to be sent to your Discord channel (in this case a simple list of hyperlinks for each title)
- Send to Discord!

Disclaimer: This should be for personal use only. The links go back to the blu-ray.com website, which is a good thing! Don't abuse this by slamming their site with some crazy level of automation frequency. Support the blu-ray.com website by using their affiliate links whenever you do want to preorder a title ;)

This is one of my first shared templates, so it may not be super optimal or perfect, but it works for my needs and hopefully you'll find some use out of it!

Note: Discord currently has a 2000-character limit on webhook messages, so some messages may get truncated as a result.
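If the truncation bothers you, one option is to split the link list into multiple messages before posting. A hypothetical n8n Code-node sketch (it assumes a prior node outputs items with a `link` field, which is not part of this template as shipped):

```javascript
// Hypothetical Code-node sketch: split the day's list of links into
// Discord-safe chunks (under the 2000-character webhook message limit).
const links = $input.all().map(item => item.json.link); // assumes prior node outputs { link }
const LIMIT = 2000;

const chunks = [];
let current = '';
for (const line of links) {
  // +1 accounts for the newline that joins lines within a chunk
  if (current && current.length + line.length + 1 > LIMIT) {
    chunks.push(current);
    current = '';
  }
  current += (current ? '\n' : '') + line;
}
if (current) chunks.push(current);

// Each chunk becomes one item; a following HTTP Request node can POST
// { "content": <chunk> } to your Discord webhook URL once per item.
return chunks.map(content => ({ json: { content } }));
```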
by Ranjan Dailata
**Notice**

Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**

This workflow template enables intelligent data extraction from ProductHunt using Bright Data's Model Context Protocol (MCP) and processes search results with Google Gemini. It is designed for individuals and teams who need automated, intelligent discovery and analysis of new tech products. It's especially valuable for:

- Startup Analysts & VC Researchers
- Growth Hackers & Marketers
- Recruiters & Tech Scouts
- Product Managers & Innovation Teams
- AI & Automation Enthusiasts

**What problem is this workflow solving?**

Traditional product discovery on ProductHunt is constrained by limited descriptions and requires repeated manual validation through web searches. Manually extracting and enriching this data is slow, repetitive, and error-prone. This workflow solves the problem by:

- Extracting real-time ProductHunt data using Bright Data's MCP infrastructure to mimic real-user behavior and avoid blocks.
- Performing contextual Google searches for a specific ProductHunt product to gather use cases, reviews, and related information.
- Structuring results using the Google Gemini LLM to provide human-readable insights and reduce noise.
- Delivering results seamlessly by saving output to disk, updating Google Sheets, and sending webhook alerts.

**What this workflow does**

- Input Field node: defines the ProductHunt category with the search term(s) you want to target. This drives the extraction and search operations.
- Agent Operation node: performs two major tasks. First, it retrieves trending products from ProductHunt using Bright Data MCP. Second, it runs a contextual Google search for each product, covering reviews, competitor mentions, and real-world usage examples.
- LLM node (Google Gemini): analyzes and summarizes the extracted web content, removes noise (ads, menus, etc.), and structures the content into bullet points, insights, or JSON objects.

**Pre-conditions**

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or proxy).
6. In n8n, configure the credentials to connect an MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to set the Bright Data API token within the Environments textbox as API_TOKEN=<your-token> (a configuration sketch follows the customization notes below).

**How to customize this workflow to your needs**

This workflow is flexible and modular, allowing you to adapt it for various research, product discovery, or trend analysis use cases. Below are the key customization points and how to modify them.
- Define your target products or topics: change the input parameter to a specific ProductHunt category, tag, or keyword (e.g., "AI tools", "SaaS", "DevOps").
- Change output destinations:
  - Save to disk: change the file format (.json, .csv, .md) or the directory path.
  - Google Sheet: modify the sheet name and structure (columns like Product, Summary, Link).
  - Webhook notification: point to a Slack/Discord/CRM/webhook URL with payload mapping.
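As referenced in the setup section, an MCP Client (STDIO) credential for the Bright Data server is typically configured along these lines. Treat this as a sketch: the exact field labels depend on your n8n-nodes-mcp version, and the values below are the conventional way to launch the server rather than something this template dictates.

```json
{
  "command": "npx",
  "arguments": "-y @brightdata/mcp",
  "environments": "API_TOKEN=<your-token>"
}
```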
by Paul
Gmail AI Email Manager - Setup Guide

🎯 Workflow Overview

This workflow creates an intelligent Gmail email manager that can:

- Monitor incoming emails via webhook
- Analyze email content using AI
- Categorize emails automatically
- Generate smart responses
- Take actions based on email content
- Send notifications for important emails

📋 Pre-Setup Checklist

Before building the workflow, gather the necessary information and validate the approach.

Phase 1: Discovery & Planning
- [ ] Search for Gmail nodes
- [ ] Find AI analysis nodes
- [ ] Identify webhook trigger options
- [ ] Check notification nodes

Phase 2: Configuration Requirements
- [ ] Gmail API credentials
- [ ] AI service (OpenAI/Claude) API key
- [ ] Webhook URL setup
- [ ] Email classification rules

🔧 Setup Instructions

Step 1: Gmail API Setup
1. Go to Google Cloud Console
2. Create a new project or select an existing one
3. Enable the Gmail API
4. Create OAuth 2.0 credentials
5. Add the authorized redirect URI: https://your-n8n-instance.com/rest/oauth2-credential/callback

Step 2: AI Service Setup — choose one of the following:
- OpenAI: get an API key from platform.openai.com
- Claude: get an API key from console.anthropic.com
- Local AI: set up Ollama or similar

Step 3: n8n Credentials
- Gmail OAuth2: add client ID, secret, and scopes
- AI Service: add API key
- Webhook: configure the webhook URL

🔧 Quick Setup Checklist

1. Google Cloud Console
- [ ] Enable Gmail API
- [ ] Create OAuth2 credentials
- [ ] Add redirect URI: https://your-n8n.com/rest/oauth2-credential/callback
- [ ] Set up Gmail push notifications with Pub/Sub (see the payload sketch at the end of this guide)

2. API Keys
- [ ] Get an OpenAI API key from platform.openai.com
- [ ] Create a Google Sheet for logging (optional)

3. n8n Credentials
- [ ] Gmail OAuth2: Client ID, Secret, Scopes: gmail.readonly, gmail.modify, gmail.compose
- [ ] OpenAI API: your API key

4. Gmail Labels (create these)
- [ ] URGENT (red)
- [ ] IMPORTANT (orange)
- [ ] PROMOTIONAL (purple)
- [ ] PERSONAL (green)
- [ ] WORK (blue)
- [ ] SPAM (gray)

5. Update Workflow Values
- [ ] High Priority Alert: change the notification email
- [ ] Spreadsheet Log: update the sheet ID (if using)
- [ ] Webhook: copy the URL after saving the workflow

6. Test
- [ ] Save & activate the workflow
- [ ] Send a test email to Gmail
- [ ] Check the execution log
- [ ] Verify auto-categorization works

That's it! Your AI email manager is ready! 🚀
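For context on the Pub/Sub item referenced in the checklist: when Gmail push notifications are enabled, Google Cloud Pub/Sub POSTs a wrapper message to your endpoint whose `data` field is base64-encoded JSON identifying the mailbox and a history ID. Roughly (values illustrative):

```json
{
  "message": {
    "data": "<base64 of {\"emailAddress\":\"user@example.com\",\"historyId\":123456}>",
    "messageId": "1234567890",
    "publishTime": "2025-01-06T12:00:00.000Z"
  },
  "subscription": "projects/your-project/subscriptions/gmail-push"
}
```

A workflow consuming this would typically decode `data` and use the `historyId` with the Gmail API's history listing to fetch the new messages.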
by Louis Chan
**How it works**

Transform medical documents into structured data using Google Gemini AI with enterprise-grade accuracy.

- Classifies document types (receipts, prescriptions, lab reports, clinical notes)
- Extracts text with 95%+ accuracy using advanced OCR
- Structures data according to medical taxonomy standards
- Supports multiple languages (English, Chinese, auto-detect)
- Tracks processing costs and quality metrics automatically

**Set up steps**

Prerequisites: a Google Gemini API key (get one from Google AI Studio).

Quick setup:
1. Import this workflow template
2. Configure Google Gemini API credentials in n8n
3. Test with a sample medical document URL
4. Deploy your webhook endpoint

**Usage**

Send a POST request to your webhook (a standalone client sketch appears at the end of this description):

```json
{
  "image_url": "https://example.com/medical-receipt.jpg",
  "expected_type": "financial",
  "language_hint": "auto"
}
```

Get a structured response:

```json
{
  "success": true,
  "result": {
    "documentType": "financial",
    "metadata": {
      "providerName": "Dr. Smith Clinic",
      "createdDate": "2025-01-06",
      "currency": "USD"
    },
    "content": {
      "amount": 150.00,
      "services": [...]
    },
    "quality_metrics": {
      "overall_confidence": 0.95
    }
  }
}
```

**Use cases**

Healthcare organizations:
- Medical billing automation - process receipts and invoices automatically
- Insurance claim processing - extract data from claim documents
- Clinical documentation - digitize patient records and notes
- Data standardization - consistent structured output format

System integrators:
- EMR integration - connect with existing healthcare systems
- Workflow automation - reduce manual data entry by 90%
- Multi-language support - handle international medical documents
- Quality assurance - built-in confidence scoring and validation

**Supported document types**

- Financial: medical receipts, bills, insurance claims, invoices
- Clinical: medical charts, progress notes, consultation reports
- Prescription: prescriptions, medication lists, pharmacy records
- Administrative: referrals, authorizations, patient registration
- Diagnostic: lab reports, test results, screening reports
- Legal: medical certificates, documentation forms
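As mentioned in the Usage section, you can exercise the webhook from your own code. A minimal client sketch (the webhook path is a placeholder for whatever n8n assigns your workflow; Node 18+ for built-in fetch):

```javascript
// Minimal client sketch; the webhook URL is a placeholder, not the template's fixed path.
const res = await fetch('https://your-n8n-instance.com/webhook/medical-docs', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    image_url: 'https://example.com/medical-receipt.jpg',
    expected_type: 'financial',
    language_hint: 'auto',
  }),
});
const { success, result } = await res.json();
if (success) {
  // e.g. "financial 0.95"
  console.log(result.documentType, result.quality_metrics.overall_confidence);
}
```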
by explorium
Explorium Event-Triggered Outreach

This n8n and agent-based workflow automates outbound prospecting by monitoring Explorium event data (e.g. product launches, new office openings, new investments and more), researching companies, identifying key contacts, and generating tailored sales emails leveraging the Explorium MCP server.

Template Workflow Overview

Node 1: Webhook Trigger
Purpose: listens for real-time product launch events pushed from Explorium's webhook system.
How it works:
- Explorium sends HTTP POST requests containing event data
- The webhook payload includes company name, business ID, domain, product name, and event type (an illustrative payload follows this overview)
Pay attention: product launch is just one example; you can easily enroll in many more meaningful events. To learn about events and how to enroll in them, visit the events documentation.

Node 2: Company Research Agent
Agent type: Tools Agent
Purpose: enrich company data after an event occurs.
How it works:
- Uses Explorium MCP via the MCP Client tool to gather additional company data
- Uses Anthropic Claude (Chat Model) to process and interpret company information for downstream personalization

Node 3: Employee Data Retrieval
Purpose: retrieve prospect-level data for targeting.
How it works:
- Uses an HTTP Request node to call Explorium's fetch_prospects endpoint
- Filters prospects by:
  - Company business_id
  - Departments: Product, R&D, etc.
  - Seniority levels: owner, cxo, vp, director, senior, manager, partner, etc.
- Pay attention: follow the fetch prospects documentation for the full list of filters and best practices.
- Limits results to the top 5 relevant employees
- Code nodes handle filtering logic, cleaning the API response, and formatting data for downstream agents

Node 4: Conditional Branch - Prospect Data Check
If node: checks whether prospect data was successfully retrieved.
Logic:
- If prospects found → personalized emails per person
- If no prospects → fall back to a company-level general email

Node 5A: Email Writer #1 (No Prospect Data)
Agent type: Tools Agent
Purpose: write a generic outbound email using only company-level research and event info.
Powered by: Anthropic Chat Model

Node 5B: Loop Over Prospects → Email Writer #2 (Personalized)
Agent type: Tools Agent
Purpose: write a highly personalized email for each identified employee.
How it works:
- Loops through each individual prospect
- Passes company research + employee data to the LLM agent
- Generates customized emails referencing the prospect's title & department, the product launch, and a role-relevant Explorium value proposition

Node 6: Slack Notifications
Purpose: posts completed emails to an internal Slack channel for review or testing before final deployment.
Future state: can be swapped for an email sequencing platform in production.
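The overview names the fields carried by the event webhook but not their exact keys, so treat the following payload as an illustrative sketch rather than Explorium's documented schema:

```json
{
  "event_type": "product_launch",
  "business_id": "a1b2c3d4e5",
  "company_name": "Acme Corp",
  "domain": "acme.com",
  "product_name": "Acme Copilot"
}
```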
Setup Requirements

Explorium API access:
- MCP Client credentials for company enrichment and prospect fetching
- Registered webhook for event listening
- Get an Explorium API key

n8n configuration:
- Secure environment variables for API keys & webhook secret
- Code nodes configured for JSON transformation, filtering & signature validation

Customization Options

Personalization logic:
- Update LLM prompt instructions to reflect ICP priorities
- Modify email templates based on role, department, or tenure logic
- Adjust fallback behavior when prospect data is unavailable

API request tuning:
- Adjust page_size for the number of prospects retrieved
- Fine-tune seniority and department filters to match evolving targeting

Future expansion:
- Swap Slack notifications for outbound email automation
- Integrate call task assignment directly into your CRM
- Introduce an engagement scoring feedback loop (opens, clicks, replies)

Troubleshooting Tips
- Validate webhook signature matching to prevent unauthorized requests
- Ensure the correct business_id is passed to the prospect fetching endpoint
- Confirm business enrichment returns sufficient data for the company research agent
- Review agent LLM responses for correct output structure and parsing consistency
by CustomJS
n8n Workflow: Invoice PDF Generator

This n8n workflow captures invoice data and generates a PDF invoice, ready to be sent or saved. It uses a webhook to trigger the process, preprocesses the invoice data, and converts it to a PDF using HTML and custom styling. It relies on @custom-js/n8n-nodes-pdf-toolkit.

Features:
- Webhook Trigger: receives incoming data, including invoice details.
- Preprocessing: transforms the invoice data into HTML format.
- HTML to PDF Conversion: converts the preprocessed HTML into a styled PDF document.
- Response: sends the generated PDF back as the webhook response.

Notice: Community nodes can only be installed on self-hosted instances of n8n.

Requirements:
- Self-hosted n8n instance
- A CustomJS API key for PDF generation
- Invoice data for PDF generation

Workflow Steps:
1. Webhook Trigger: accepts incoming data (e.g., invoice number, recipient details, itemized list). This data is passed to the next node for processing.
2. Set Data node: configures initial values for the invoice, including the recipient, sender, invoice number, and the items on the invoice. The invoice details include information like description, unit price, and quantity.
3. Preprocess node: processes the raw data to format it correctly for HTML. This includes splitting addresses and converting the items into an HTML table format (an illustrative sketch appears at the end of this description).
4. HTML to PDF Conversion: converts the generated HTML into a PDF document. The HTML includes a header, a detailed invoice table, and a footer with contact information.
5. Respond to Webhook: returns the generated PDF as a response to the initial webhook request.

Setup Guide:

1. Configure CustomJS API
- Sign up at CustomJS.
- Retrieve your API key from the profile page.
- Add your API key as n8n credentials.

2. Design Workflow
- Create a webhook: set up a webhook to trigger the workflow when invoice data is received.
- Prepare data: ensure the incoming request contains fields like "Invoice No", "Bill To", "From", and "Details" (a list of items with price and quantity).
- Customize the HTML: the HTML template for the invoice includes custom styling to give the invoice a professional look.
- Convert to PDF: the HTML to PDF node is configured with the data generated in the preprocessing step to convert the invoice HTML to PDF format.

Example Invoice Data:

```json
{
  "Invoice No": "1",
  "Bill To": "John Doe\n1234 Elm St, Apt 567\nCity, Country, 12345",
  "From": "ABC Corporation\n789 Business Ave\nCity, Country, 67890",
  "Details": [
    { "description": "Web Hosting", "price": 150, "qty": 2 },
    { "description": "Domain", "price": 15, "qty": 5 }
  ],
  "Email": "support@mycompany.com"
}
```

Result: a PDF file.
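The template ships with its own preprocessing logic; as a rough idea of what the Preprocess step does (names here are illustrative, not the template's actual code), converting the Details array into HTML table rows might look like this in an n8n Code node:

```javascript
// Illustrative sketch of the Preprocess step; not the template's actual code.
const data = $input.first().json; // incoming invoice payload

// Convert the itemized list into HTML table rows.
const rowsHtml = data.Details.map(item => {
  const total = item.price * item.qty;
  return `<tr><td>${item.description}</td><td>${item.qty}</td>` +
         `<td>$${item.price.toFixed(2)}</td><td>$${total.toFixed(2)}</td></tr>`;
}).join('');

// Addresses arrive as newline-separated strings and are split for display.
const billToHtml = data['Bill To'].split('\n').join('<br>');

return [{ json: { ...data, rowsHtml, billToHtml } }];
```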
by InfraNodus
Set Up ElevenLabs Voice Chat Agent using Graph RAG Knowledge Graphs as Experts

This workflow creates an AI voice chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided through InfraNodus GraphRAG, which uses knowledge graphs to deliver high-quality responses without the need to set up complex RAG vector store workflows. We use ElevenLabs to set up a voice agent that can be embedded into any website or used via their API.

The advantages of using GraphRAG instead of standard vector stores for knowledge are:

- Easy and quick to set up (no complex data import workflows needed) and to update with new knowledge
- A knowledge graph has a holistic overview of your knowledge base
- Better retrieval of relations between the document chunks = higher-quality responses
- Ability to reuse the knowledge in other n8n workflows

How it works

This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. The user's prompt is received from the ElevenLabs Conversational AI agent via an n8n Webhook, which also takes care of the voice interaction. The response from n8n is then sent back to the webhook, which is polled by the ElevenLabs voice agent. The agent processes the response and provides the final answer.

Step by step:

1. The user submits a question using the ElevenLabs voice interface.
2. The question is sent via the knowledge_base tool in ElevenLabs to the n8n Webhook, with a POST request containing the user's prompt and a sessionID for the Chat Memory node in n8n (see the payload sketch after the setup steps).
3. The n8n AI Agent node checks the list of tools it has access to. Each tool has a description of its knowledge, auto-generated by InfraNodus (we call each tool an "expert").
4. The n8n AI Agent decides which tool should be used to generate a response. It may reformulate the user's query to be more suitable for the expert.
5. The query is then sent to the InfraNodus HTTP node endpoint, which queries the graph that corresponds to that expert.
6. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context into account, along with a list of relevant statements retrieved using a combination of RAG and GraphRAG.
7. The n8n AI Agent node integrates the responses received from the experts to produce the final answer.
8. The final answer is sent back to the webhook endpoint.
9. The ElevenLabs conversational AI agent picks up the response arriving from the knowledge_base tool via the webhook, condenses it into conversational format, and transforms the text into voice.

How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using the PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body name field. Keep the other settings intact, or learn more about them at the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.
6. You will also need to set up an ElevenLabs account and create a conversational AI agent there.
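For orientation on step 2 above: the POST body that the ElevenLabs knowledge_base tool sends to the n8n Webhook carries the user's prompt and a session ID. The exact keys depend on how you configure the tool in ElevenLabs, so this is only a sketch:

```json
{
  "question": "What does the innovation expert say about Q3 priorities?",
  "sessionId": "elevenlabs-conversation-12345"
}
```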
See the post note in the n8n workflow for a complete step-by-step description, or our support article on setting up an ElevenLabs AI voice agent.

Once the voice AI agent is ready, you might want to combine it with a text AI chatbot workflow so your users have a choice between text and voice interaction. In that case, you may be interested in our free open-source website popup chat widget popupchat.dev, where you can create an embed code to add to your blog or website and let the user choose between text and voice interaction.

Requirements

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key
- An ElevenLabs account

FAQ

1. How many "experts" should I aim for?

We recommend aiming for the optimal number of people in a team, which is usually 2-7. If you add more experts, your orchestrating AI agent will have trouble choosing the most suitable "expert" tool for the user's query. You can mitigate this by specifying in the AI agent description that it can choose a maximum of 3-7 experts to provide a response.

2. Why use InfraNodus GraphRAG and not a standard vector store for knowledge?

First, vector stores are complex to set up and to update. You'd need a separate workflow for that, decide on the vector dimensions, add metadata to your knowledge, etc. With InfraNodus, you have a complete RAG / GraphRAG solution under the hood that is easy to set up and provides high-quality responses that take the overall structure of, and the relations between, your ideas into account.

3. Why not use ElevenLabs' own knowledge base?

One reason is that you want your knowledge base in one place so you can reuse it in other n8n workflows. Another is that you will not get such a good separation between the "experts" when you converse with the agent: the answers you get will be based on top matches from all the books / articles you upload, while with the InfraNodus GraphRAG setup you can better control which graphs are consulted as experts and have an explicit way to display this data.

Customizing this workflow

You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. There are many more customizations available in our GitHub repo of n8n workflows.

Check out the complete setup guide for this workflow at https://support.noduslabs.com/hc/en-us/articles/20318967066396-How-to-Build-a-Text-Voice-AI-Agent-Chatbot-with-n8n-Elevenlabs-and-InfraNodus

Also check out the video tutorial with a demo:
by Ranjan Dailata
**Notice**

Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**

The Search Engine Intelligence Extractor is a powerful n8n automation that leverages Bright Data's MCP-based AI agents to simulate human-like searches across Google, Bing, and Yandex, and then distills clean, structured insights using Google Gemini. This workflow is tailored for:

- SEO analysts researching competitors or market trends
- Market researchers needing real-time search visibility
- Journalists & content writers gathering contextual insights
- AI developers creating intelligent assistants
- Digital marketers tracking brand mentions or news

**What problem is this workflow solving?**

Traditional scraping of search engines is often blocked, cluttered, or filled with irrelevant information, and manually analyzing and cleaning this data for insight is time-consuming. This workflow solves the problem by:

- Simulating real user search behavior via a Bright Data MCP-based AI agent
- Performing multi-platform search (Google, Bing, Yandex) in one unified flow
- Extracting clean, human-readable results (stripping ads, navigation, etc.)
- Structuring the content using the Google Gemini LLM
- Automating delivery via webhook or saving to disk

**What this workflow does**

Input Fields node (an illustrative input set appears at the end of this description):
- Accepts the search query
- Accepts an action, for example "Perform a google search"; replace "google" with "bing", "yandex", etc. for other search providers
- Accepts a webhook notification URL

Bright Data MCP agent execution:
- Triggers Bright Data's intelligent search agent
- Handles search navigation, result loading, and pagination

Human-readable data extractor:
- Cleanses HTML; removes ads, footers, and irrelevant links
- Produces a readable narrative of results

Final output handling:
- Saves the processed response to disk
- Sends the structured data to a webhook for real-time use

**Pre-conditions**

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or proxy).
6. In n8n, configure the credentials to connect an MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to set the Bright Data API token within the Environments textbox as API_TOKEN=<your-token>

**How to customize this workflow to your needs**

Add scheduled execution:
- Add a Cron trigger to run this workflow on a set schedule (e.g., daily/weekly keyword tracking).

Push results to custom destinations. Connect the output to:
- Google Sheets (for analytics or dashboards)
- PostgreSQL or MySQL DBs (for structured storage)
- Notion or Airtable (for content pipelines)
- Slack or email (for alerting teams)

Customize webhook notifications:
- Update the webhook URL in the notification node to push processed results to external APIs, CRMs, or real-time dashboards.
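As referenced in the overview, the Input Fields node collects three values. An illustrative input set (the field names here are examples; match them to the node's actual fields in the template):

```json
{
  "search_query": "best project management tools 2025",
  "action": "Perform a google search",
  "webhook_url": "https://your-endpoint.example.com/search-results"
}
```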
by Paulo Ramirez
Upload your CRM contacts to telli and schedule AI voice-agent calls

Introduction to telli and AI voice-agent calls

telli is an innovative platform that provides AI-powered voice agents capable of making calls and performing tasks tailored to specific customer use cases. These AI voice-agents can handle a wide range of communication tasks, from appointment scheduling to customer support, with remarkable efficiency and natural conversation flow.

This template is designed for businesses and organizations looking to automate their outbound calling processes using telli's AI voice-agents in conjunction with Airtable as their CRM. It solves the problem of manual call scheduling and data transfer between your CRM and calling system, saving time and reducing human error.

Prerequisites

- telli account
- Airtable base with contact information
- n8n instance

Step-by-step setup guide

1. n8n setup: create a new workflow in n8n and add the Airtable node to connect to your CRM table.
2. telli API configuration: log in to your telli dashboard, then locate and copy your API key under telli → Settings → API/Webhooks.
3. Workflow configuration: add two HTTP Request nodes to your n8n workflow. Set the "Authorization" header in both POST requests, replacing the value with your telli API key. Configure the first request to use the /add-contact endpoint and the second to use the /schedule-call endpoint.
4. Data mapping: map the relevant fields from your Airtable node to the telli API requests.
5. Testing and activation: run a test execution of your workflow (a standalone client sketch appears at the end of this description). Once satisfied with the results, activate the workflow.

API endpoint details

Add Contact endpoint
- URL: https://api.telli.com/v1/add-contact
- Method: POST
- Headers: Authorization: YOUR-API-KEY, Content-Type: application/json
- Payload:

```json
{
  "external_contact_id": "string",
  "salutation": "string",
  "first_name": "string",
  "last_name": "string",
  "phone_number": "string",
  "email": "jsmith@example.com",
  "contact_details": {},
  "timezone": "string"
}
```

Schedule Call endpoint
- URL: https://api.telli.com/v1/schedule-call
- Method: POST
- Headers: Authorization: YOUR-API-KEY, Content-Type: application/json
- Payload:

```json
{
  "contact_id": TELLI-CONTACT-ID,
  "agent_id": "string",
  "max_retry_days": 123,
  "call_details": {
    "message": "Hello, this is your friendly reminder!",
    "questions": [
      {
        "fieldName": "email",
        "neededInformation": "email of the customer",
        "exampleQuestion": "What is your email address?",
        "responseFormat": "email string"
      }
    ]
  },
  "override_from_number": "string"
}
```

Use cases

This template is versatile and can be applied to various scenarios, including:

- Lead qualification: automatically schedule calls to new leads entered in your CRM.
- Appointment reminders: set up calls to remind clients of upcoming appointments.
- Customer feedback: schedule follow-up calls after product deliveries or service completions.

Uploading multiple contacts

For bulk operations, you have two options:

1. Loop node: include a Loop node in your n8n workflow to process multiple contacts sequentially.
2. Batch endpoints: instead of /add-contact and /schedule-call, use telli's batch endpoints:
   - /add-contacts-batch: add multiple contacts within an array.
   - /schedule-calls-batch: schedule multiple calls at once.
Example of batch endpoint usage:

```json
{
  "contacts": [
    { "name": "John Doe", "phone": "+1234567890" },
    { "name": "Jane Smith", "phone": "+1987654321" }
  ]
}
```

By leveraging this template, you can seamlessly integrate your Airtable CRM with telli's powerful AI voice-agents, automating your outbound calling process and enhancing your customer communication strategy.
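If you prefer to try the /add-contact endpoint outside n8n first, a minimal client sketch using the payload documented above (contact values are illustrative; Node 18+ for built-in fetch):

```javascript
// Minimal standalone sketch for trying the add-contact endpoint.
// Values are illustrative; replace YOUR-API-KEY with your telli API key.
const res = await fetch('https://api.telli.com/v1/add-contact', {
  method: 'POST',
  headers: {
    'Authorization': 'YOUR-API-KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    external_contact_id: 'crm-001',
    salutation: 'Mr',
    first_name: 'John',
    last_name: 'Doe',
    phone_number: '+1234567890',
    email: 'jsmith@example.com',
    timezone: 'Europe/Berlin',
  }),
});
console.log(await res.json()); // the returned contact id is what /schedule-call expects
```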