by slow-groovin@api2o.com
AI Comprehensive Research on User's Query with Gemini and Web Search

What is this?

Performs comprehensive research on a user's query by dynamically generating search terms, querying the web with Google Search (via Gemini), reflecting on the results to identify knowledge gaps, and iteratively refining the search until it can provide a well-supported answer with citations (similar to Perplexity).

This workflow is a reproduction of gemini-fullstack-langgraph-quickstart in n8n. The gemini-fullstack-langgraph-quickstart is a demo by the Google Gemini team that showcases how to build a powerful full-stack AI agent using Gemini and LangGraph.

How It Works

- **Generate Query 💬:** Generates one or more search query tasks based on the user's question. Uses Gemini 2.0 Flash.
- **Web Research 🌐:** Executes web search tasks using the native Google Search API tool in combination with Gemini 2.0 Flash.
- **Reflection 📚:** Identifies knowledge gaps and generates potential follow-up queries.

Setup

1. Configure API credentials: create a Google Gemini (PaLM) API credential using your own Gemini key, then connect it to three nodes: Google Gemini Chat Model, GeminiSearch, and reflection.
2. Configure the Redis source: prepare a Redis service that n8n can reach, create a Redis credential, and connect it to every Redis node.

Customize

- Try different Gemini models.
- Try modifying the parameters number_of_initial_queries and max_research_loops.

Why use Redis?

Redis acts as external storage for the workflow's global variables (loop counter, search results, etc.). This workflow contains a loop, which needs global state (the equivalent of State in LangGraph), and global state is difficult to manage in n8n without external storage. A sketch of the loop-control pattern is shown below.
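To make the Redis role concrete, here is a minimal sketch (not the template's actual code) of the loop-control check as an n8n Code node. It assumes a preceding Redis "get" node returned the stored state; the key names `loop_count` and `follow_up_queries` are illustrative.

```javascript
// Decide whether to run another research loop.
const state = $input.first().json;

const loopCount = parseInt(state.loop_count ?? '0', 10);
const maxLoops = 3; // mirrors the max_research_loops parameter

// Keep looping only while the Reflection step found knowledge gaps
// and the loop cap has not been reached.
const hasGaps = (state.follow_up_queries ?? []).length > 0;

return [{
  json: {
    shouldContinue: hasGaps && loopCount < maxLoops,
    next_loop_count: loopCount + 1, // written back to Redis by a "set" node
  },
}];
```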
by Jez
Workflow: Automated Weekly Google Calendar Summary via Email with AI ✨🗓️📧

Get a personalized, AI-powered summary of your upcoming week's Google Calendar events delivered straight to your inbox! This workflow automates the entire process, from fetching events to generating an intelligent summary and emailing it to you.

🌟 Overview

This n8n workflow connects to your Google Calendar, retrieves events for the upcoming week (Monday to Sunday, based on the day the workflow runs), uses Google Gemini AI to create a well-structured and insightful summary, and then emails this summary to you. It's designed to help you start your week organized and aware of your commitments.

Key Features:

- **Automated Weekly Summary:** Runs on a schedule (default: weekly) to keep you updated.
- **AI-Powered Insights:** Leverages Google Gemini to not just list events, but to identify important ones and offer a brief weekly outlook.
- **Personalized Content:** Uses your specified timezone, locale, name, and city for accurate and relevant information.
- **Clear Formatting:** Events are grouped by day and displayed chronologically with start and end times. Important events are highlighted.
- **Email Delivery:** Receive your schedule directly in your inbox in a clean HTML format.
- **Customizable:** Easily adapt to your specific calendar, AI preferences, and email settings.

⚙️ How It Works: Step-by-Step

The workflow consists of the following nodes, working in sequence:

1. weekly_schedule (Schedule Trigger): Initiates the workflow. By default it triggers once a week at 12:00 PM; adjust this to your preference (e.g., Sunday evening or Monday morning).
2. locale (Set Node): A crucial node for you to configure! It sets user-specific parameters such as your preferred language/region (users-locale), timezone (users-timezone), your name (users-name), and your home city (users-home-city). These are used throughout the workflow for correct date/time formatting and for personalizing the AI prompt.
3. date-time (Set Node): Dynamically generates various date and time strings based on the current execution time and the locale settings. These define the precise 7-day window (from the current day to 7 days ahead, ending at midnight) for fetching calendar events.
4. get_next_weeks_events (Google Calendar Node): Connects to your specified Google Calendar and fetches all events within the 7-day window calculated by the date-time node. Requires Google Calendar API credentials and the ID of the calendar you want to use.
5. simplify_evens_json (Code Node): Runs a small JavaScript snippet to clean up the raw event data from Google Calendar. It removes several fields that aren't needed for the summary (like htmlLink, etag, iCalUID), making the data more concise for the AI. A sketch of this snippet appears at the end of this listing.
6. aggregate_events (Aggregate Node): Takes all the individual (and now simplified) event items and groups them into a single JSON array called eventdata. This is the format the AI agent expects for processing.
7. Google Gemini (LM Chat Google Gemini Node): The connection point to the Google Gemini language model. Requires Google Gemini (or PaLM) API credentials.
8. event_summary_agent (Agent Node): This is where the magic happens! It uses the Google Gemini model and a detailed system prompt to generate the weekly schedule summary. The prompt instructs the AI to:
   - Start with a friendly greeting.
   - Group events by day (Monday to Sunday) for the upcoming week, using the user's timezone and locale.
   - Format event times clearly (e.g., 09:30 AM - 10:30 AM: Event Summary).
   - Prefix "IMPORTANT:" to events whose summary or description contains keywords like "urgent," "deadline," or "meeting."
   - Conclude with a 1-2 sentence helpful insight about the week's schedule.
   - Process the input eventdata (the JSON array of calendar events).
9. Markdown (Markdown to HTML Node): Converts the text output from the event_summary_agent (generated in Markdown for easy structure) into HTML, so the email body is well-formatted with proper line breaks, lists, and emphasis.
10. send_email (Email Send Node): Sends the final HTML summary to your specified email address. Requires SMTP (email sending) credentials and your desired "From" and "To" email addresses.

🚀 Getting Started: Setup Instructions

Follow these steps to get the workflow up and running:

1. Import the Workflow: Download the workflow JSON file. In your n8n instance, go to "Workflows", click "Import from File", and select the downloaded JSON file.
2. Configure Credentials: You'll need credentials for three services. In n8n, go to "Credentials" in the left sidebar and click "Add credential."
   - Google Calendar API: Search for "Google Calendar" and create new OAuth2 credentials, following the authentication flow. Once created, select these credentials in the get_next_weeks_events node.
   - Google Gemini (PaLM) API: Search for "Google Gemini" or "Google PaLM" and create new credentials. You'll typically need an API key from Google AI Studio or Google Cloud. Once created, select these credentials in the Google Gemini node.
   - SMTP / Email: Search for your email provider (e.g., "SMTP," "Gmail," "Outlook") and create credentials. This usually involves providing your email server details, username, and password/app password. Once created, select these credentials in the send_email node.
3. ‼️ IMPORTANT: Customize User Settings in the locale Node: Open the locale node and update the following values in the "Assignments" section:
   - users-locale: Your locale string (e.g., "en-AU" for English/Australia, "en-US" for English/United States, "de-DE" for German/Germany). This affects how dates, times, and numbers are formatted.
   - users-timezone: Your timezone string (e.g., "Australia/Sydney", "America/New_York", "Europe/London"). This is critical for displaying event times correctly for your location.
   - users-name: Your name (e.g., "Bob"), used to personalize the email greeting.
   - users-home-city: Your home city (e.g., "Sydney"), which the AI can use for additional context.
4. Configure the get_next_weeks_events (Google Calendar) Node: Open the node and set the "Calendar" parameter to the calendar you want to fetch events from. The default may be a placeholder like c_4d9c2d4e139327143ee4a5bc4db531ffe074e98d21d1c28662b4a4d4da898866@group.calendar.google.com. Change this to your primary calendar (often your email address) or the specific Calendar ID you want to use. You can find Calendar IDs in your Google Calendar settings.
5. Configure the send_email Node: Open the node, set the fromEmail parameter to the address the summary should be sent from, and set the toEmail parameter to the address(es) that should receive it. You can also customize the subject line if desired.
6. (Optional) Customize the AI Prompt in event_summary_agent: To change how the AI summarizes events (e.g., different keywords for important events, a different tone, or specific formatting tweaks), edit the "System Message" in the event_summary_agent node's parameters.
7. (Optional) Adjust the Schedule in weekly_schedule: Open the weekly_schedule node and modify the "Rule" to change when and how often the workflow runs (e.g., a specific day of the week, a different time).
8. Activate the Workflow: Once everything is configured, toggle the "Active" switch in the top right corner of the workflow editor to ON.

📬 What You Get

You'll receive an email (based on your schedule) with a subject like "Next Week Calendar Summary: [Start Date] - [End Date]". The email body will contain:

- A friendly greeting.
- Your schedule for the upcoming week (Monday to Sunday), with events listed chronologically under each day.
- Event times displayed in your local timezone (e.g., 09:30 AM - 10:30 AM: Team Meeting).
- Priority events clearly marked (e.g., IMPORTANT: 02:00 PM - 03:00 PM: Project Deadline Review).
- A brief, insightful observation about your week's schedule.

🛠️ Troubleshooting & Notes

- **Timezone is Key:** Ensure your users-timezone in the locale node is correct. This is the most common source of incorrect event times.
- **Google API Permissions:** When setting up Google Calendar and Gemini credentials, make sure you grant the necessary permissions.
- **AI Output Varies:** The AI-generated summary can vary slightly each time. The prompt is designed to guide it, but LLMs have inherent creativity.
- **Calendar Event Details:** The quality of the summary (especially for identifying important events) depends on how detailed your calendar event titles and descriptions are. Including keywords like "meeting," "urgent," or "prepare for" in your events helps the AI.

💬 Feedback & Contributions

Feel free to modify and enhance this workflow! If you have suggestions, improvements, or run into issues, please share them in the n8n community. Happy scheduling!
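As a reference for step 5 of the node walkthrough above, here is a minimal sketch of what the simplify_evens_json Code node does. It assumes standard Google Calendar event fields; the exact list of dropped fields in the template may differ slightly.

```javascript
// Strip metadata the AI summary doesn't need from each calendar event.
const FIELDS_TO_DROP = ['htmlLink', 'etag', 'iCalUID'];

return $input.all().map((item) => {
  const event = { ...item.json };
  for (const field of FIELDS_TO_DROP) {
    delete event[field];
  }
  return { json: event };
});
```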
by Naveen Choudhary
Who is this for?

Marketing, content, and enablement teams that need a quick, human-readable summary of every new video published by the YouTube channels they care about, without leaving Slack.

What problem does this workflow solve?

Manually checking multiple channels, skimming long videos, and pasting the highlights into Slack wastes time. This template automates the whole loop: detect a fresh upload from your selected channels → pull subtitles → distill the key takeaways with GPT-4o-mini → drop a neatly formatted digest in Slack.

What this workflow does

1. A Schedule Trigger fires every 10 minutes, then grabs a list of YouTube RSS feeds from a Google Sheet.
2. HTTP + XML nodes fetch and parse each feed; only brand-new videos continue.
3. The YouTube API fetches the title/description, and RapidAPI grabs English subtitles.
4. Code nodes build an AI payload; OpenAI returns a JSON summary + article.
5. A formatter turns that JSON into Slack Block Kit, and Slack posts it (see the sketch below).
6. Processed links are appended back to the "Video Links" sheet to prevent duplicates.

Setup

1. Make a copy of this Google Sheet and connect a Google Sheets OAuth2 credential with edit rights.
2. Slack App: create one → add chat:write, channels:read, app_mention; enable Event Subscriptions; install and store the Bot OAuth token in an n8n Slack credential.
3. RapidAPI key for https://yt-api.p.rapidapi.com/subtitles (300 free calls/mo) → save as HTTP Header Auth.
4. OpenAI key → save in an OpenAI credential.
5. Add your RSS feed URLs to the "RSS Feed URLs" tab; press Execute Workflow.

How to customise

- Adjust the schedule interval or freshness window in "If newly published".
- Swap the OpenAI model or prompt for shorter/longer digests.
- Point the Slack node at a different channel or DM.
- Extend the AI payload to include thumbnails or engagement stats.

Use-case ideas

- **Product marketing:** Instantly brief sales & CS teams when a competitor uploads a feature demo.
- **Internal learning hub:** Auto-summarise conference talks and share bullet-point notes with engineers.
- **Social media managers:** Get ready-to-post captions and key moments for re-purposing across platforms.
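A hypothetical version of the Block Kit formatter Code node follows. The input field names (title, url, bullets) are assumptions about the AI's JSON summary, not the template's exact schema.

```javascript
// Turn the AI's JSON summary into Slack Block Kit blocks.
const { title, url, bullets } = $input.first().json;

const blocks = [
  // Linked video title as a bold section
  { type: 'section', text: { type: 'mrkdwn', text: `*<${url}|${title}>*` } },
  // Key takeaways as a bulleted list
  {
    type: 'section',
    text: { type: 'mrkdwn', text: bullets.map((b) => `• ${b}`).join('\n') },
  },
];

return [{ json: { blocks } }];
```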
by Dvir Sharon
🎯 Automated TikTok Influencer Discovery & Analysis

A complete n8n automation that discovers TikTok influencers using Bright Data, evaluates their fit using Claude AI, and sends personalized outreach emails. Designed for marketing teams and brands that need a scalable, intelligent way to find and connect with relevant creators.

📋 Overview

This workflow provides a full-service influencer discovery pipeline: it finds TikTok profiles using search keywords, uses AI to assess alignment with your brand, and initiates contact with qualified influencers. Ideal for influencer marketing, brand partnerships, and campaign planning.

✨ Key Features

- **🔍 Keyword-Based Discovery:** Locate TikTok influencers by specific niche-related keywords.
- **📊 Bright Data Integration:** Access accurate TikTok profile data from Bright Data's datasets.
- **🤖 AI-Powered Analysis:** Claude AI evaluates each profile's fit with your brand based on bio, content, and more.
- **📧 Smart Email Notifications:** Sends tailored outreach emails to creators deemed highly relevant.
- **📈 Data Storage:** Google Sheets stores profile details, AI evaluation results, and outreach status.
- **🎯 Intelligent Filtering:** Processes only influencers who meet your criteria (e.g., 5,000+ followers, industry match).
- **⚡ Fast & Reliable:** Uses professional scraping with robust error handling.
- **🔄 Batch Processing:** Supports bulk influencer processing through a single automated flow.

🎯 What This Workflow Does

Input

- **Search Keywords:** TikTok terms for finding niche creators
- **Business Info:** Brand description and industry
- **Collaboration Criteria:** Minimum follower count, niche alignment

Processing Steps

1. Form Submission
2. TikTok Discovery via Bright Data
3. Data Extraction and Normalization
4. Save to Google Sheets
5. Relevance Scoring via Claude AI
6. Filtering Based on AI Score + Follower Count
7. Personalized Email Outreach

Output Data Points

| Field | Description | Example |
|---------------|------------------------------------|----------------------------------|
| Profile ID | TikTok profile identifier | tiktoker123456 |
| Username | TikTok handle | @creativecreator |
| URL | Profile link | https://tiktok.com/@creativecreator |
| Description | Creator bio | "Fashion & lifestyle content..." |
| Followers | Total follower count | 50,000 |
| Collaboration | AI assessment of brand fit | "Highly Relevant" |
| Analysis | 50-word Claude AI relevance summary | "Strong alignment with fashion..." |

🚀 Setup Instructions

Prerequisites

- n8n (cloud or self-hosted)
- Bright Data account with TikTok access
- Google Sheets + Gmail
- Anthropic Claude API key
- 10–15 minutes setup time

Step-by-Step Setup

1. Import the workflow via JSON in n8n.
2. Configure Bright Data: add API credentials and the dataset ID.
3. Google Sheets: set up credentials and map the columns.
4. Claude AI: insert the API key and select the desired model.
5. Gmail: authenticate Gmail and update the mail node settings.
6. Update variables: replace the placeholders with your business info.
7. Test & launch: submit a sample form and verify all outputs.

📖 Usage Guide

Adding Search Keywords

Submit the form with search terms, a business description, and an industry category to trigger the workflow.
Understanding AI Analysis

Emails are sent only if (the filter sketch below mirrors these rules):

- Collaboration status = Highly Relevant
- Follower count ≥ 5,000
- Industry alignment is confirmed

Claude AI returns a 50-word analysis justifying the match.

Customizing Filters

Edit the "Find the Collaborator" prompt to adjust:

- Follower thresholds
- Industry relevance
- Additional metrics (e.g., engagement rate)

Viewing Results

The Google Sheets log includes:

- Influencer metadata
- AI scores and rationale
- Collaboration status
- Email delivery timestamp

🔧 Customization Options

- **Add More Fields:** Engagement rate, contact email, content themes
- **Email Personalization:** Customize message templates or integrate other mail services
- **Enhanced Filtering:** Use engagement rates, region, content frequency

🚨 Troubleshooting

| Issue | Fix |
|-------|-----|
| Bright Data fails | Recheck the API key and dataset ID |
| No influencer data | Adjust keywords or dataset scope |
| Sheets permission error | Re-authenticate and check sharing |
| Claude fails | Validate the API key and prompt |
| Emails not sent | Re-auth Gmail or update the recipient field |
| Form not triggering | Reconfirm the webhook URL and permissions |

Advanced Debugging

- Check n8n execution logs
- Run individual nodes to pinpoint failures
- Confirm all data formats
- Handle API rate limits
- Add error-catch nodes for retries

📊 Use Cases & Examples

- **Brand Discovery:** Fashion, tech, and fitness creators
- **Competitor Insights:** Find influencers used by rival brands
- **Campaign Planning:** Build targeted influencer lists
- **Market Research:** Identify creator trends across regions

⚙️ Advanced Configuration

- **Batch Execution:** Process multiple keywords with delay logic
- **Engagement Metrics:** Scrape and calculate likes-to-follower ratios
- **CRM Integration:** Sync qualified profiles to HubSpot, Salesforce, or Slack

📈 Performance & Limits

- **Processing Time:** 3–5 minutes per keyword
- **Concurrency:** 3–5 simultaneous fetches (depends on plan)
- **Accuracy:** >95% influencer data reliability
- **Success Rate:** 90%+ for outreach and processing
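Here is a sketch of the qualification filter described under "Understanding AI Analysis". The column names `Followers` and `Collaboration` match the output data points table; adjust them to your actual sheet mapping.

```javascript
// Pass through only influencers who meet the outreach criteria.
return $input.all().filter(({ json }) => {
  // Handle comma-formatted counts like "50,000".
  const followers = Number(String(json.Followers).replace(/,/g, ''));
  return followers >= 5000 && json.Collaboration === 'Highly Relevant';
});
```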
by Roman Rozenberger
How it works

• Extract AI Overviews from Google Search: receives data from the browser extension via webhook
• Convert HTML to Markdown: automatically processes and cleans the AI Overview content
• Store in Google Sheets: archives all extracted AI Overviews with metadata and sources
• Generate SEO Guidelines: AI analyzes your page content against the AI Overview and suggests improvements
• Automate Analysis: batch-process multiple URLs and schedule regular checks

Set up steps

• Import workflow: load the JSON template into your n8n instance (2 minutes)
• Configure Google Sheets: set up the OAuth connection and create a spreadsheet with the required columns (5 minutes)
• Set up AI provider: add OpenRouter API credentials for Gemini 2.5 Pro (3 minutes)
• Install browser extension: deploy the companion Chrome/Firefox extension for data extraction (5 minutes)
• Test webhook endpoint: verify the connection between the extension and the n8n workflow (2 minutes)

Total setup time: ~15 minutes

What you'll need:

- Google account for the Sheets integration
- Google Sheet template with the required columns
- OpenRouter API key for Gemini 2.5 Pro model access
- Browser extension: Chrome Extension or Firefox Add-on
- n8n instance (local or cloud)

Use cases:

- **SEO agencies:** Monitor AI Overview presence for client keywords
- **Content marketers:** Analyze what content gets featured in AI Overviews
- **E-commerce:** Track AI Overview coverage for product-related searches
- **Research:** Build datasets of AI Overview content across different topics

The workflow comes with a free browser extension (Chrome | Firefox) that automatically extracts AI Overview content from Google Search and sends it via webhook to your n8n workflow for processing and analysis (an illustrative payload shape is shown at the end of this listing).

GitHub Repository: https://github.com/romek-rozen/ai-overview-extractor/

Detailed Setup Instructions - AI Overview Extractor

Prerequisites

- **n8n instance** (local or cloud), version 1.95.3+
- **Google account** for the Sheets integration
- **OpenRouter API account** for Gemini 2.5 Pro access
- **Browser** (Chrome/Firefox) for the extension

Step 1: Import the Workflow

1. Open n8n and navigate to Workflows.
2. Click "Add workflow" → "Import from JSON".
3. Upload the AI_OVERVIES_EXTRACTOR_TEMPLATE.json file.
4. Save the workflow.

Step 2: Configure Google Sheets

Create a new Google Sheet with these columns:

extractedAt | searchQuery | sources | markdown | myURL | task | guidelines | key

A public Google Sheet template is available here: https://docs.google.com/spreadsheets/d/15xqZ2dTiLMoyICYnnnRV-HPvXfdgVeXowr8a7kU4uHk/edit?gid=0#gid=0

Copy the Google Sheets URL (you'll need it for the workflow).

Set up Google Sheets credentials:

1. In n8n, go to Settings → Credentials.
2. Click "Add credential" → "Google Sheets OAuth2 API".
3. Follow the OAuth setup to authorize n8n to access Google Sheets.
4. Name the credential (e.g., "Google Sheets AI Overview").

Configure the Google Sheets nodes. Update these nodes with your Google Sheets URL:

- Get URLs to Analyze
- Save AI Overview to Sheets
- Save SEO Guidelines to Sheets

In each node: set documentId to your Google Sheets URL, set sheetName to your Google Sheets URL, and select your Google Sheets credential.

Step 3: Configure the AI Provider (OpenRouter)

Get an OpenRouter API key:

1. Sign up at https://openrouter.ai/
2. Generate an API key in your account settings.
3. Add credits to your account.

Set up OpenRouter credentials:

1. In n8n, go to Settings → Credentials.
2. Click "Add credential" → "OpenRouter API".
3. Enter your API key and name the credential (e.g., "OpenRouter AI Overview").

Configure the OpenRouter node: select the Gemini 2.5 Pro Model node, choose your credential from the dropdown, and verify the model (default: google/gemini-2.5-pro-preview).

Step 4: Install the Browser Extension

- Chrome (official extension, recommended): visit https://chromewebstore.google.com/detail/ai-overview-extractor/cbkdfibgmhicgnmmdanlhnebbgonhjje and click "Add to Chrome".
- Firefox (official add-on): visit https://addons.mozilla.org/en-US/firefox/addon/ai-overview-extractor/ and click "Add to Firefox".

Step 5: Configure the Webhook Connection

1. In the n8n workflow, click on the Webhook node and copy the webhook URL (it should look like http://localhost:5678/webhook/ai-overview-extractor-template-123456789).
2. Go to Google Search and perform any search that shows an AI Overview.
3. Click the browser extension button (AI Overview Extractor).
4. In the webhook configuration section, paste your webhook URL.
5. Click "Test"; it should show ✅ Test successful.
6. Click "Save" to store the configuration.

Step 6: Activate and Test

1. In n8n, toggle the workflow to "Active" (top-right switch) and verify all nodes are properly configured.
2. Test end to end: search Google for something that shows an AI Overview, use the extension to extract it, and send it via the webhook.
3. Check your Google Sheet for the data and verify the Markdown conversion worked correctly.

Optional: Batch Analysis Setup

For the SEO analysis features: add URLs to the myURL column in your Google Sheet, set the task column to "create guidelines", run the workflow manually or wait for the 15-minute scheduler, then check the guidelines column for AI-generated SEO recommendations.

Troubleshooting

- Webhook issues: ensure n8n is running on port 5678, check that the workflow is activated, and verify the webhook URL format.
- Google Sheets errors: confirm the OAuth credentials are working, check the sheet URL format, verify the column names match exactly, and ensure the Get URLs to Analyze, Save AI Overview to Sheets, and Save SEO Guidelines to Sheets nodes are properly configured.
- OpenRouter issues: check the API key's validity, ensure sufficient account credits, try different models if Gemini 2.5 Pro fails, and verify the Gemini 2.5 Pro Model node is properly connected.
- Extension problems: check the browser console for errors, verify the extension is properly installed, ensure you're on google.com/search pages, and confirm the webhook URL is correctly configured in the extension.

Next Steps

- **Customize the AI prompts** in the Generate SEO Recommendations node for your specific needs.
- **Adjust the scheduler frequency** (default: 15 minutes).
- **Add more URL analysis** by populating the Google Sheet.
- **Monitor usage** and API costs.

Support

- **GitHub Issues:** https://github.com/romek-rozen/ai-overview-extractor/issues
- **n8n Community:** https://community.n8n.io/
- **Template Documentation:** check the included README files
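For reference, here is an illustrative webhook payload as it maps onto the sheet columns above. The extension's actual field names may differ, so inspect a real webhook execution before relying on them.

```javascript
// Example shape of the data the extension posts to the n8n webhook.
const examplePayload = {
  extractedAt: '2025-01-15T12:34:56Z',  // when the AI Overview was captured
  searchQuery: 'what is n8n',           // the Google search that produced it
  sources: ['https://example.com/a'],   // source URLs cited by the Overview
  markdown: '## AI Overview\n...',      // the converted Overview content
};
```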
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who this is for

This workflow automates the real-time extraction of job descriptions and salary information from job listing pages using Bright Data MCP and analyzes the content with OpenAI GPT-4o mini. It is ideal for:

- **Recruiters & HR Tech Startups:** Automate job data collection from public listings
- **Market Intelligence Teams:** Analyze compensation trends across companies or geographies
- **Job Boards & Aggregators:** Power search results with structured, enriched listings
- **AI Workflow Builders:** Extend to other career platforms or automate resume-job match analysis
- **Analysts & Researchers:** Track hiring signals and salary benchmarks in real time

What problem is this workflow solving?

Traditional scraping of job portals is challenging due to cluttered content, anti-scraping measures, and inconsistent formatting, and manually analyzing salary ranges and job descriptions is tedious and error-prone. This workflow solves the problem by:

- Simulating user behavior with the Bright Data MCP Client to bypass anti-scraping systems
- Extracting structured, clean job data in Markdown format
- Using OpenAI GPT-4o mini to analyze and extract precise salary details and refined job descriptions
- Merging and formatting the result for easy consumption
- Delivering the final output via webhook, Google Sheets, or the file system

What this workflow does

Input nodes:

- job_search_url: the job listing or search result URL
- job_role: the title or role being searched for (used in logging/formatting)

MCP Client operations:

- MCP Salary Data Extractor: simulates browser behavior and scrapes salary-related content (if available)
- MCP Job Description Extractor: extracts the full job description as structured Markdown content

OpenAI GPT-4o mini nodes:

- Salary Information Extractor: uses GPT-4o mini to detect, clean, and standardize salary range data (if any)
- Job Description Refiner: extracts role responsibilities, qualifications, and benefits from unstructured text
- Company Information Extractor: uses Bright Data MCP and GPT-4o mini to extract company information

Merge node: combines the refined job description and the extracted salary information into a unified JSON response object (see the sketch at the end of this listing).

Aggregate node: aggregates the job description and salary information into a single JSON response object.

Final output handling. The output is delivered in three formats, depending on your downstream needs:

- **Save to Disk:** output stored with a filename that includes the timestamp and job role
- **Google Sheet Update:** adds a new row with the job role, salary, summary, and link
- **Webhook Notification:** pushes the merged response to an external system

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and must complete the setup described in the Setup section below.
- You need an OpenAI API key for the GPT-4o mini nodes.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data, navigate to Proxies & Scraping, and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel.
5. In n8n, configure the OpenAI account credentials.
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>

How to customize this workflow to your needs

- Modify the input source: change job_search_url to point at any job board or aggregator, and customize job_role to reflect the type of jobs being analyzed.
- Tweak the LLM prompts (optional): refine the GPT-4o mini prompts to extract additional fields like benefits, tech stacks, or remote eligibility.
- Change the output format: customize the merged object to output JSON, CSV, or Markdown based on downstream needs, and add additional destinations (e.g., Slack, Airtable, Notion) via n8n nodes.
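A sketch of the Merge step's unified response follows. The node references and field names (salary_range, responsibilities, qualifications) are illustrative assumptions; the template's actual keys may differ.

```javascript
// Combine the refined description and the extracted salary data.
const salary = $('Salary Information Extractor').first().json;
const description = $('Job Description Refiner').first().json;

return [{
  json: {
    job_role: $('Set Input Fields').first().json.job_role,
    salary_range: salary.salary_range ?? 'Not disclosed',
    responsibilities: description.responsibilities,
    qualifications: description.qualifications,
    extracted_at: new Date().toISOString(), // also usable in the saved filename
  },
}];
```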
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who this is for

The DNB Company Search & Extract workflow is designed for professionals who need to gather structured business intelligence from Dun & Bradstreet (DNB). It is ideal for:

- Market Researchers
- B2B Sales & Lead Generation Experts
- Business Analysts
- Investment Analysts
- AI Developers Building Financial Knowledge Graphs

What problem is this workflow solving?

Gathering business information from the DNB website usually involves manual browsing, copying company details, and organizing them in spreadsheets. This workflow automates the entire data collection pipeline: searching DNB via Google, scraping the relevant pages, structuring the data, and saving it in usable formats.

What this workflow does

This workflow performs automated search, scraping, and structured extraction of DNB company profiles using Bright Data's MCP search agents and OpenAI's GPT-4o mini model. It includes:

1. Set Input Fields: provide search_query and webhook_notification_url (see the sketch at the end of this listing).
2. Bright Data MCP Client (Search): performs a Google search for the DNB company URL.
3. Markdown Scrape from DNB: scrapes the company page using Bright Data and returns it as Markdown.
4. OpenAI LLM Extraction: transforms the Markdown into clean structured data, extracting business information (company name, size, address, industry, etc.).
5. Webhook Notification: sends the structured response to your provided webhook.
6. Save to Disk: persists the structured data locally for logging or auditing.

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and must complete the setup described in the Setup section below.
- You need an OpenAI API key for the GPT-4o mini extraction node.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data, navigate to Proxies & Scraping, and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel.
5. In n8n, configure the OpenAI account credentials.
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.
7. Update the Set Input Fields node with search_query and webhook_notification_url.
8. Update the file name and path to persist on disk.

How to customize this workflow to your needs

- **Search Engine:** The default is Google, but you can change the MCP client engine to Bing or Yandex if needed.
- **Company Scope:** Modify the search query logic for niche filtering, e.g., "biotech startups site:dnb.com".
- **Structured Fields:** Customize the LLM prompt to extract additional fields like CEO name, revenue, or ratings.
- **Integrations:** Push the output to Notion, Airtable, or CRMs like HubSpot using additional n8n nodes.
- **Formatting:** Convert the output to PDF or CSV using the built-in File and Spreadsheet nodes.
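A hypothetical code equivalent of the Set Input Fields step, scoping the Google search to dnb.com as in the Company Scope tip above. The webhook URL is a placeholder.

```javascript
// Build the DNB-scoped search query from a company name or niche.
const company = 'biotech startups'; // or any company name / niche
return [{
  json: {
    search_query: `${company} site:dnb.com`,
    webhook_notification_url: 'https://example.com/notify', // replace with yours
  },
}];
```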
by InfyOm Technologies
✅ What problem does this workflow solve?

Automatically monitor multiple websites every 5 minutes, log downtime, notify your team instantly via multiple channels, and track uptime/downtime in a Google Sheet, all without relying on expensive monitoring tools.

⚙️ What does this workflow do?

- Triggers every 5 minutes to monitor website health.
- Fetches a list of website URLs from a Google Sheet.
- Checks the status of each website one by one.
- Sends instant alerts if a website is down (Email, Slack, Telegram, Voice Call).
- Logs downtime events in Google Sheets.
- Tracks when websites are back up and updates the log.
- Sends recovery notifications when a site is live again (Email, Slack, Telegram).

🔧 Setup

📄 Google Sheets Setup

- Sheet 1: the list of website URLs to monitor.
- Sheet 2: the log that stores uptime/downtime records.

Sample format: https://docs.google.com/spreadsheets/d/1_VVpkIvpYQigw5q0KmPXUAC2aV2rk1nRQLQZ7YK2KwY/edit?usp=sharing

✉️ Gmail, Slack & Telegram Setup

Connect Gmail, Slack, and Telegram to n8n, and configure each service with the proper credentials or OAuth.

📞 Vapi (Voice Call) Setup

- Create a Vapi account and generate an API key.
- Configure the API parameters (vapi_api_key, assistant_id, number, phone_number_id) on the VAPI node.
- Insert the First Message specified in the workflow.

🧠 How it Works

⏱ 1. Scheduled Monitoring

A Schedule Trigger runs the workflow every 5 minutes. It reads the list of URLs from the Google Sheet and loops through each one.

🌍 2. Website Health Check

Each website is pinged to check whether it's online.

🔴 3. If a Website is Down

The workflow verifies whether a downtime record already exists. If not, it:

- Adds a new row to the Google Sheet with the timestamp.
- Sends notifications via 📧 Email, 💬 Slack, 📲 Telegram, and a 📞 voice call via Vapi.

🟢 4. If a Website is Back Up

The workflow fetches the matching downtime record and updates the sheet with:

- ✅ The uptime timestamp
- ⏱ The total downtime duration (a sketch of this calculation follows below)

It then sends recovery notifications via 📧 Email, 💬 Slack, and 📲 Telegram. (No phone call is made for uptime.)

👤 Who can use it?

This is perfect for:

- 🚀 Startups
- 👨💻 Freelance Developers
- 🛠 SaaS Product Owners
- 🖥 IT/DevOps Teams

If you're looking to replace tools like UptimeRobot, Pingdom, or StatusCake, this no-code solution gives you full control, customization, and cost efficiency.
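A minimal sketch of the downtime calculation when a site recovers, assuming the log row stores the original down timestamp in a `downAt` column (an illustrative name; match it to your sheet).

```javascript
// Compute total downtime once the site responds again.
const downAt = new Date($input.first().json.downAt);
const upAt = new Date();
const minutes = Math.round((upAt - downAt) / 60000);

return [{
  json: {
    upAt: upAt.toISOString(),   // ✅ uptime timestamp for the sheet
    downtime: `${minutes} min`, // ⏱ total downtime duration
  },
}];
```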
by Praveena
Why

Teachers now spend 3–4 hours per lesson creating materials and resources from scratch. Additional/special needs make it even harder to produce extra materials. This is unsustainable and takes their time away from teaching. The workflow is tailored for UK teachers but can be expanded globally with prompt and form enhancements.

How it works

I built a system with three specialized AI agents that creates complete lesson packages, automatically uploads a document to Google Drive, and puts an appointment in the calendar to review the document.

Features

- A research agent pulls specific information, including special education needs and curriculums.
- A scoring and assessment agent generates tailored assessment plans, assignments, and a grading mechanism based on the chosen requirements.
- An integration agent provides ideas for expanding to other tools. In future there is an opportunity to add Kahoot or other tools to create quizzes.
- Finally, the enriched document is emailed and a calendar invite is sent for review.

What you need

- n8n
- Any LLM API key (I used OpenAI)
- Google Drive integration
- Google Calendar integration
- Change the email ID from XXX@gmail.com to your own email ID in the email component.

Support

Watch this video for an intro to how it works. Contact me at info@pankstr.com for any queries.
by Dr. Firas
AI-powered WhatsApp booking system with instant SMS confirmations

Who is this for?

This workflow is designed for solo entrepreneurs, consultants, coaches, clinics, or any business that handles client appointments and wants to automate the entire scheduling experience via WhatsApp, without the need for live agents.

What problem is this workflow solving?

Responding to inbound messages, collecting booking details, suggesting available times, and sending reminders can be a huge time drain. This workflow eliminates manual handling by:

- Automating WhatsApp conversations with an AI assistant
- Booking appointments directly into Cal.com
- Sending timely SMS reminders before appointments

It ensures you never miss a lead or a follow-up, even while you sleep.

What this workflow does

From a single WhatsApp message, the workflow:

1. Triggers via a WhatsApp webhook
2. Uses GPT-4 to handle the conversation flow and qualify the prospect
3. Collects the name, email, and selected service
4. Calls the Cal.com API to fetch available time slots
5. Books the appointment and stores it in Google Sheets
6. Sends a confirmation message via WhatsApp
7. Periodically scans for upcoming appointments
8. Sends SMS reminders to clients 2 hours before their session (a sketch of this scan follows below)

Setup

1. Connect your Webhook node to a WhatsApp API (e.g., 360dialog, Twilio, or Ultramsg)
2. Add your OpenAI API key for the GPT-4 nodes
3. Configure your Cal.com API key and set your calendar ID
4. Link your Google Sheets with fields like: name, email, date, time, status, reminder_sent
5. Connect your SMS service (e.g., sms77) with API credentials
6. Adjust the schedule in the reminder node as needed

How to customize this workflow to your needs

- **Change the language or tone of the AI assistant** by editing the system prompt in the GPT node
- **Filter available time slots** by service, team member, or duration
- **Modify the reminder timing** (e.g., 1 hour before, 24 hours before, etc.)
- **Add conditional logic** to route users to different booking flows based on their responses
- **Integrate additional CRMs** or notification channels like email or Slack

📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube
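A sketch of the periodic reminder scan, using the sheet fields listed in the setup (date, time, reminder_sent). The date/time format and the "yes" flag value are assumptions; match them to your sheet.

```javascript
// Flag rows whose appointment starts within the next two hours
// and that haven't been reminded yet.
const now = Date.now();
const twoHours = 2 * 60 * 60 * 1000;

return $input.all().filter(({ json }) => {
  const start = new Date(`${json.date}T${json.time}`).getTime();
  const untilStart = start - now;
  return json.reminder_sent !== 'yes' && untilStart > 0 && untilStart <= twoHours;
});
```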
by Evoort Solutions
🚀 AI-Powered LinkedIn Post Automation

🧩 How It Works

This workflow automatically generates LinkedIn posts based on a user-submitted topic, including both content creation and image generation, then publishes the post to LinkedIn. It's ideal for marketers, content creators, or businesses looking to streamline their social media activity without manual post creation.

High-Level Workflow:

1. Trigger: the workflow starts when a user submits a form with a topic for the LinkedIn post.
2. Data Mapping: the topic is mapped and prepared for the AI model.
3. AI Content Generation: calls the Google Gemini AI model to generate engaging post content and a visual image prompt.
4. Image Creation: sends the image prompt to the external gen-imager API to generate a professional image matching the topic.
5. Post Creation: publishes the text and image to LinkedIn, automatically updating the user's feed.

⚙️ Set Up Steps (Quick Overview)

🕐 Estimated Setup Time: ~10–20 minutes

1. Connect Google Gemini: set up your Google Gemini API credentials for content creation.
2. Set Up the External Image API: configure the gen-imager API for visual creation based on the post prompt.
3. Connect LinkedIn: set up OAuth2 credentials to authenticate your LinkedIn account and allow publishing posts.
4. Form Submission Setup: create a simple web form for users to submit the topic for LinkedIn posts.
5. Activate the Workflow: once everything is connected, activate the workflow. It will trigger automatically on form submissions.

💡 Important Notes:

- The flow uses Google Gemini (PaLM) to generate content based on the user's topic.
- **Text to Image:** the image generation step creates a professional, LinkedIn-appropriate image based on the post's topic using the gen-imager API.
- You can customize the visual elements of the posts and adjust the tone of the generated content to your preferences.

🛠 Detailed Node Breakdown:

1. On Form Submission (Trigger): captures the user-submitted topic and initializes the workflow.
2. Mapper (Field Mapping): maps the captured topic to a variable that is passed along for content generation.
3. AI Agent (Content Generation): calls Google Gemini to generate professional LinkedIn post content and an image prompt based on the submitted topic. Outputs content in a structured form: post text and image prompt.
4. Google Gemini Chat Model: the AI model that generates actionable insights, engaging copy, and an image prompt for the LinkedIn post.
5. Normalizer (Data Cleanup): cleans the AI model's output to ensure the content and image prompt are correctly formatted for the next steps.
6. Text to Image (Image Generation): sends the image prompt to the gen-imager API, which returns a custom image based on the post's topic.
7. Decoder (Base64 Decoding): decodes the image from base64 format for uploading to LinkedIn (a sketch of this step follows below).
8. LinkedIn (Post Creation): publishes the generated text and image to LinkedIn, creating a polished post for the user's feed.

⏱ Execution Time Breakdown:

- **Total Estimated Execution Time:** ~15–40 seconds per workflow run
- On Form Submission: instant (trigger)
- Mapper (Field Mapping): ~1–2 seconds
- AI Content Generation: ~5–10 seconds (depending on server load)
- Text to Image: ~5–15 seconds (depends on the external API)
- LinkedIn Post Creation: ~2–5 seconds

🚀 Ready to Get Started?

Let's get you started with automating your LinkedIn posts! Create your free n8n account and set up the workflow using this link.

📝 Notes & Customizations

- **Form Fields:** customize the form to gather more specific information for the LinkedIn posts (like audience targeting or post category).
- **Image API Customization:** adjust the image generation prompt to fit your brand's style, or change the color palette as needed.
- **Content Tone:** adjust the tone by modifying the system message sent to Google Gemini for content generation.
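A sketch of the Decoder step: turning the image API's base64 string into n8n binary data for the LinkedIn node. The `imageBase64` field name and the PNG mime type are assumptions about the gen-imager response.

```javascript
// Convert the base64 image into an n8n binary item.
const base64 = $input.first().json.imageBase64;

const binary = await this.helpers.prepareBinaryData(
  Buffer.from(base64, 'base64'),
  'post-image.png',
  'image/png',
);

return [{ json: {}, binary: { data: binary } }];
```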
by Jimleuk
This n8n template demonstrates one approach to achieving more natural, less frustrating conversations with AI agents: reducing interruptions by predicting the end of user utterances.

When we text or chat casually, it's not uncommon to break our sentences over multiple messages, or, in voice, to break our speech with the odd pause or umms and ahhs. If an agent replies to every message, it's likely to interrupt us before we finish our thoughts, and that can get very annoying! Previously, I demonstrated a simple technique for buffering each incoming message by 5 seconds, but that approach still suffers in some scenarios where more time is needed. This technique has no arbitrary time limit; instead, it uses AI to figure out when it's the agent's turn based on the user's messages, allowing the user to take all the time they need.

How it works

- Telegram messages are received, but no reply is generated for them by default. Instead, they are sent to the prediction subworkflow to determine whether a reply should be generated.
- The prediction subworkflow begins by checking Redis for the current user's prediction session state.
- If this is a new "utterance", it kicks off the "predict end of utterance" loop, the purpose of which is to buffer messages in a smart way!
- New user messages continue to be accepted by the workflow until enough is collected for the prediction classifier to determine that the end of the utterance has been reached.
- The loop is then broken, and the buffered chat messages are combined and sent to the AI agent to generate a response, which is sent to the user via the Telegram node (see the sketch below).
- The prediction session state is then deleted, signalling that the workflow is ready to start again with a new message.

How to use

- This system sits between your preferred chat platform and the AI agent, so all you need to do is replace the Telegram nodes as required.
- Where LLM-only prediction isn't working well enough, consider more traditional code-based heuristic checks to improve detection.
- Ideally you'll want a fast but accurate LLM so your user isn't waiting longer than they have to. At the time of writing, Gemini-2.5-flash-lite was the fastest in testing, but keep a lookout for smaller and more powerful LLMs in the future.

Requirements

- Gemini for the LLM
- Redis for session management
- Telegram for the chat platform
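A sketch of the moment the loop breaks: once the classifier judges the utterance complete, the buffered messages are combined for the agent. The node names ("Get Session State", "Predict End of Utterance") and the `verdict`/`messages` fields are illustrative, not the template's exact identifiers.

```javascript
// Combine buffered messages only when the utterance looks complete.
const buffered = $('Get Session State').first().json.messages ?? [];
const verdict = $('Predict End of Utterance').first().json.verdict;

if (verdict !== 'complete') {
  return []; // keep buffering; the loop re-checks on the next message
}

return [{ json: { combined: buffered.join(' ') } }];
```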