by Elegant Biztech
# Automated QuickBooks Invoice to Custom PDF & Email

Tired of the standard, boring invoices from QuickBooks Online? This workflow completely automates the process of creating beautiful, custom-branded PDF invoices and emailing them directly to your clients, saving you time and elevating your brand's professionalism. The moment you create an invoice in QuickBooks, this workflow triggers, fetches all the necessary data, and generates a polished, multi-page-aware PDF invoice complete with your company logo and signature.

## Key Features

- **Fully Automated:** Runs instantly when a new invoice is created in QuickBooks.
- **Custom Branding:** Automatically fetches your company logo and signature from a URL to place on the invoice.
- **Modern & Professional Design:** Uses a premium, multi-column HTML template that is clean, easy to read, and far superior to the default QBO templates.
- **Multi-Page Ready:** If an invoice has many line items, the template intelligently creates multiple pages and adds a "Page X of Y" footer automatically.
- **Smart Layout:** The totals and summary block are designed to never break across pages, ensuring a professional look no matter the length.
- **Automatic Emailing:** The final PDF is attached to a well-formatted email and sent directly to the customer's email address on file.

## Prerequisites

Before you start, you will need a few things:

- A running n8n instance.
- A QuickBooks Online account with API access.
- A running Gotenberg instance. This is a powerful, open-source tool for converting HTML to PDF, and this workflow is designed to connect to its API.
- Publicly accessible URLs for your company logo and signature image (e.g., hosted on your website or a service like Imgur).

## Setup Guide

Follow these steps carefully to configure the workflow for your own use. Nodes that need your attention are marked with a [!!] prefix.

### Step 1: Configure the QuickBooks Webhook

The workflow starts with a webhook, and you need to tell QuickBooks to send information to it.

1. Open the [!!] Listen for New QuickBooks Invoice node. You will see a Webhook URL; copy the Production URL.
2. Go to your QuickBooks Developer dashboard, select your app, and navigate to the Webhooks section.
3. Paste the n8n URL into the Endpoint URL field and select the Invoice event to subscribe to.

### Step 2: Connect Your QuickBooks Account

1. Open the [!!] Get Invoice Data from QuickBooks node.
2. In the "Credentials" field, select your existing QuickBooks Online credentials or create a new set.

### Step 3: Add Your Branding

1. Open the [!!] Fetch Company Logo Image node. In the URL field, replace the placeholder with the public URL of your company's logo.
2. Open the [!!] Fetch Company Signature Image node. In the URL field, replace the placeholder with the public URL of your signature image.

### Step 4: Update the PDF Generation Service

1. Open the [!!] Generate PDF via Gotenberg node.
2. In the URL field, replace the placeholder http://YourGotenBergInstanceURL/... with the real URL of your running Gotenberg instance (see the sketch at the end of this section for what the underlying call looks like).

### Step 5: Configure Your Email

1. Open the [!!] Email PDF Invoice to Customer node.
2. In the "Credentials" field, select your SMTP or email service credentials.
3. Customize the From Email and Subject fields. You can also edit the HTML email body to match your company's tone of voice.

### Step 6: Activate Your Workflow

You're all set! Save the workflow and activate it using the toggle at the top-right of the screen. Now, when you create a new invoice in QuickBooks, this automation will handle the rest.
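For reference, here is a minimal sketch of the kind of request the Generate PDF via Gotenberg node makes, assuming a Gotenberg 7/8 instance exposing the standard Chromium HTML route; the instance URL is a placeholder for your own deployment:

```typescript
// Minimal sketch: convert rendered invoice HTML to a PDF via Gotenberg's
// Chromium route. GOTENBERG_URL is a placeholder for your own instance.
const GOTENBERG_URL = "http://localhost:3000";

async function htmlToPdf(html: string): Promise<Buffer> {
  const form = new FormData();
  // Gotenberg treats the uploaded file named index.html as the entry document.
  form.append("files", new Blob([html], { type: "text/html" }), "index.html");

  const res = await fetch(`${GOTENBERG_URL}/forms/chromium/convert/html`, {
    method: "POST",
    body: form,
  });
  if (!res.ok) throw new Error(`Gotenberg returned HTTP ${res.status}`);
  return Buffer.from(await res.arrayBuffer());
}
```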
## A Note from the Creator

Thank you for using this workflow! I believe that professional and automated invoicing is a cornerstone of a great business. This tool was designed to save you time and help you put your best foot forward with every client interaction. If you have any questions or need assistance, feel free to reach out.

- **Website:** https://www.elegantbiztech.com/
- **Email:** sales@elegantbiztech.com
by John Pranay Kumar Reddy
## ✨ Summary

Efficiently monitor Kubernetes environments by sending only unique error logs from Grafana Loki to Slack. Reduces alert fatigue while keeping your team informed about critical log events.

## 🧑‍💻 Who's it for

- DevOps or SRE engineers running EKS/GKE/AKS
- Anyone using Grafana Loki and Promtail for centralized logging
- Teams that want Slack alerts but hate alert spam

## 🔍 What it does

This n8n workflow queries your Loki logs every 5 minutes, filters only the critical ones (error, timeout, exception, etc.), removes duplicate alerts within the batch, and sends clean alerts to a Slack channel with full metadata (pod, namespace, node, container, log, timestamp).

## 🧠 How it works

1. 🕒 **Schedule Trigger**: Every 5 minutes (customizable)
2. 🌐 **Loki HTTP Query**: Pulls logs from the last 10 minutes; keyword match on error, failed, oom, etc. (see the query sketch at the end of this section)
3. 🧹 **Log Parsing**: Extracts log fields (pod, container, etc.) and skips empty/malformed results
4. 🧠 **Deduplication**: Removes repeated error messages within the query window
5. 📤 **Slack Notification**: Sends a nicely formatted message to Slack

## ⚙️ Requirements

| Tool | Notes |
| --- | --- |
| Loki | Exposed internally or externally |
| Slack App | With chat:write OAuth scope |
| n8n | Cloud or self-hosted |

## 🔧 How to Set It Up

1. Import the JSON file into n8n
2. Update:
   - Loki API URL (e.g., http://loki-gateway.monitoring.svc.cluster.local)
   - Slack Bearer Token (via credentials)
   - Target Slack channel (e.g., #k8s-alerts)
   - (Optional) Change keywords in the query regex
3. Activate the workflow
4. Ensure the n8n pod/container has access to your Kubernetes cluster, pods, and namespaces

## 🛠 How to Customize

- Want more or fewer keywords? Adjust the regex in the Query Loki for Error Logs node.
- Need stronger deduplication logic? Enhance the Remove Duplicate Alerts node.
- Want 5-log summaries every 5 minutes? Fork this and add a Batch + Slack group sender.

## Output

Grafana Loki logs to Slack.
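As a rough illustration of the Loki HTTP Query step, the underlying call might look like the following against Loki's query_range endpoint; the label selector and keyword regex are examples, so tune them to your own labels and keywords:

```typescript
// Illustrative sketch of the Loki query this workflow runs every 5 minutes.
// LOKI_URL and the label selector are examples; adjust to your cluster.
const LOKI_URL = "http://loki-gateway.monitoring.svc.cluster.local";

async function fetchErrorLogs() {
  const end = BigInt(Date.now()) * 1_000_000n;    // Loki expects nanoseconds
  const start = end - 10n * 60n * 1_000_000_000n; // last 10 minutes
  // Case-insensitive keyword match, mirroring the workflow's regex filter.
  const query = `{namespace=~".+"} |~ "(?i)(error|failed|timeout|exception|oom)"`;

  const params = new URLSearchParams({
    query,
    start: start.toString(),
    end: end.toString(),
    limit: "1000",
  });
  const res = await fetch(`${LOKI_URL}/loki/api/v1/query_range?${params}`);
  const body = await res.json();
  // Each stream carries labels (pod, namespace, node, container) plus log lines.
  return body.data.result;
}
```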
by Audun
## Who is this for?

- Security professionals
- Developers
- Individuals interested in data breach awareness

## Use Case

- Automated monitoring for new breaches
- Proactive identity protection
- Demonstration of a simple cache mechanism

## What this workflow does

- Checks the Have I Been Pwned API every 15 minutes for the latest breaches.
- Compares new breach data against previously notified breaches.
- Demonstrates a simple cache mechanism to track previously seen breaches.

## How the Cache Functionality Works

- **Read from Cache:** Retrieves the last known breach from cache.json to avoid redundant alerts for the same breach.
- **Compare Against Current Breach:** The workflow checks whether the latest fetched breach differs from the cached one.
- **Update the Cache:** If a new breach is detected, it updates cache.json with the latest breach data.

A sketch of this check appears at the end of this section.

## Setup instructions

- The endpoint used in this workflow does not require an API key.
- Add your desired alert mechanism in the red box attached to the New breach node.

## How to customize this workflow to your needs

- **Modify Notification Settings:** Tailor where alerts are sent (email, Slack, etc.) by adding the desired node after the New breach node. This node contains all the data from the breach, so it is easily available.

You can choose from a variety of n8n nodes to send alerts when a new breach is detected. Below are a few common options you might consider adding after the New breach node:

- **Email Node**
  - What it does: Sends an email notification to one or more recipients.
  - Use case: Great for simple alerts to your inbox or a team distribution list.
  - Customization: You can include breach details in the subject or body of the email, using data from the New breach node.
- **Slack Node**
  - What it does: Sends a message to a Slack channel or user.
  - Use case: Perfect for real-time alerts to your team in Slack.
  - Customization: You can post breach details directly in a channel or DM, and format the message (bold, code blocks, etc.).
- **Microsoft Teams Node**
  - What it does: Sends a message to a Teams channel.
  - Use case: For organizations that use Microsoft Teams for communication.
  - Customization: As with Slack, you can customize the message content and include all relevant breach information.
- **Discord Node**
  - What it does: Sends an alert message to a Discord channel.
  - Use case: Useful for teams or communities that coordinate via Discord.
  - Customization: Add formatted messages with breach details for easy viewing.
- **Telegram Node**
  - What it does: Sends messages to a Telegram chat or group.
  - Use case: Good for mobile notifications and fast alerts.
  - Customization: You can include breach summaries or detailed information, and even use bots to automate this.
- **Webhook Node (as a sender)**
  - What it does: Sends breach data to another service via a webhook.
  - Use case: If you have an external system or app that handles alerts, you can push the data directly to it.
  - Customization: Send JSON payloads with detailed breach information to trigger actions in other systems.
- **SMS Nodes (like Twilio)**
  - What it does: Sends an SMS notification to one or more phone numbers.
  - Use case: For urgent alerts that need to be seen immediately.
  - Customization: Keep messages concise, including key breach details like the time, type of breach, and affected system.

- **Adjust Check Frequency:** Change the interval in the Schedule Trigger node (e.g., hourly or daily).
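To make the cache mechanism concrete, here is a minimal sketch of the check in plain code. The latestbreach endpoint and the Name field follow HIBP's public v3 API as I understand it, and the file handling stands in for the workflow's cache nodes:

```typescript
// Sketch of the cache check: compare the most recent breach against
// cache.json and only alert when something new shows up.
import { existsSync, readFileSync, writeFileSync } from "node:fs";

async function checkForNewBreach() {
  // Public endpoint; no API key required for breach metadata.
  const res = await fetch("https://haveibeenpwned.com/api/v3/latestbreach");
  const latest = await res.json();

  const cached = existsSync("cache.json")
    ? JSON.parse(readFileSync("cache.json", "utf8"))
    : null;

  if (!cached || cached.Name !== latest.Name) {
    writeFileSync("cache.json", JSON.stringify(latest)); // update the cache
    return latest; // new breach: hand off to your alert node of choice
  }
  return null; // already notified about this one
}
```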
by Dr. Firas
# AI-Powered HR Workflow: CV Analysis and Evaluation from Gmail to Sheets

## Who is this for?

This workflow is designed for HR professionals, recruiters, startup founders, and operations teams who receive candidate resumes by email and want to automate the evaluation process using AI. It's ideal for teams that receive high volumes of applications and want to streamline screening without sacrificing quality.

## What problem is this workflow solving?

Manually reviewing every resume is time-consuming, inconsistent, and often inefficient. This workflow automates the initial screening process by:

- Extracting resume data directly from incoming emails
- Analyzing resumes using GPT-4 to evaluate candidate fit
- Saving scores and notes in Google Sheets for easy filtering

It helps teams qualify candidates faster while staying organized.

## What this workflow does

1. Detects when a new email with a CV is received (Gmail)
2. Filters out non-relevant messages using an AI classifier
3. Extracts the resume text (PDF parsing)
4. Uploads the original file to Google Drive
5. Retrieves job offer details from a connected Google Sheet
6. Uses GPT-4 to evaluate the candidate's fit for the job (a sketch of this step follows below)
7. Parses the AI output to extract the candidate's score
8. Logs the results into a central Google Sheet
9. Sends a confirmation email to the applicant

## Setup

1. Install n8n self-hosted
2. Add your OpenAI API Key in the AI nodes
3. Enable the following APIs in your Google Cloud Console: Gmail API, Google Drive API, Google Sheets API
4. Create OAuth credentials and connect them in n8n
5. Configure your Gmail trigger to watch the inbox receiving CVs
6. Create a Google Sheet with columns like: Candidate, Score, Job, Status, etc.

## How to customize this workflow to your needs

- Adjust the AI scoring prompt to match your company's hiring criteria
- Add new columns to the Google Sheet for additional metadata
- Include Slack or email notifications for each qualified candidate
- Add multiple job profiles and route candidates accordingly
- Add a Telegram or WhatsApp step to notify HR in real time

📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube
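As a hypothetical example of the evaluation step, the prompt and parsing might look like this; the prompt wording, model name, and JSON shape are assumptions, so match them to your own scoring criteria:

```typescript
// Hypothetical sketch of the GPT-4 evaluation step: score a resume against
// a job offer and return a machine-readable result. Uses the official
// openai npm package; OPENAI_API_KEY is read from the environment.
import OpenAI from "openai";

const openai = new OpenAI();

async function scoreCandidate(resumeText: string, jobOffer: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    response_format: { type: "json_object" }, // force parseable output
    messages: [
      {
        role: "system",
        content:
          'You are an HR screening assistant. Reply with JSON: {"score": number 0-100, "notes": string}.',
      },
      { role: "user", content: `Job offer:\n${jobOffer}\n\nResume:\n${resumeText}` },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```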
by HoangSP
# AI-Powered Research Agent using Perplexity Sonar

This workflow acts as an AI-powered research assistant using the Perplexity Sonar model. When triggered by another workflow, it sends a user-defined prompt to the Perplexity API to retrieve up-to-date search results. The response is then parsed into a clean format for downstream processing.

## How it Works

1. **Trigger:** Activated from another workflow via Execute Workflow Trigger.
2. **Prompt Setup:** Sets a system role message and user query dynamically.
3. **API Call:** Sends a POST request to Perplexity's /chat/completions endpoint with your credentials (see the sketch at the end of this section).
4. **Response Handling:** Extracts the message content from the API response.
5. **Output:** Returns the result, ready for display or further processing.

## Requirements

- A Perplexity AI API Key
- Authentication set up via Header Auth with a Bearer token
- An n8n instance that allows outbound HTTP requests

## Customization Tips

- Modify the system prompt to suit your research domain
- Chain this workflow with other automations such as blog creation or summaries
- Replace the output handling logic to feed Google Sheets, Notion, or Telegram
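For reference, the POST request this workflow sends looks roughly like the following; the model name sonar is an example, so use whichever Sonar variant your plan includes:

```typescript
// Sketch of the Perplexity API call behind the workflow's HTTP Request node.
const PPLX_API_KEY = process.env.PPLX_API_KEY!; // your Perplexity key

async function research(prompt: string): Promise<string> {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${PPLX_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar", // example model; swap in your Sonar variant
      messages: [
        { role: "system", content: "Be precise. Cite recent sources." },
        { role: "user", content: prompt },
      ],
    }),
  });
  const body = await res.json();
  // Mirrors the workflow's response handling: pull out the message content.
  return body.choices[0].message.content;
}
```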
by Stefan
# Track n8n Node Definitions from GitHub and Export to Google Sheets

## Overview

This workflow automatically retrieves and processes metadata from the official n8n GitHub repository, filters all available .node.json files, parses their structure, and appends structured information to a Google Sheet. It is perfect for developers, community managers, and technical writers who need to maintain up-to-date information about n8n's evolving node ecosystem.

## Setup Instructions

### Prerequisites

Before setting up this workflow, ensure you have:

- A GitHub account with API access
- A Google account with Google Sheets access
- An active n8n instance (cloud or self-hosted)

### Step 1: GitHub API Configuration

1. Navigate to GitHub Settings → Developer Settings → Personal Access Tokens
2. Generate a new token with public_repo permissions
3. Copy the generated token and store it securely
4. In n8n, create a new "GitHub API" credential
5. Paste your token in the credential configuration and save

### Step 2: Google Sheets Setup

1. Create a new Google Sheets document
2. Set up the following column headers in the first row:
   - node (Column A) - Node identifier/name
   - nodeVersion (Column B) - Version of the node
   - codexVersion (Column C) - Codex version number
   - categories (Column D) - Node categories
   - credentialDocumentation (Column E) - Credential documentation URL
   - primaryDocumentation (Column F) - Primary documentation URL
3. Note down the Google Sheets document ID from the URL
4. Configure Google Sheets OAuth2 credentials in n8n

### Step 3: Workflow Configuration

1. Import the workflow into your n8n instance
2. Update the following placeholder values:
   - Replace YOUR_GOOGLE_SHEETS_DOCUMENT_ID with your actual document ID
   - Replace YOUR_WEBHOOK_ID if using webhook functionality
3. Configure the GitHub API credentials in the HTTP Request nodes
4. Set up Google Sheets credentials in the Google Sheets nodes
5. Share your Google Sheets document with the email address associated with your Google OAuth2 credentials
6. Grant "Editor" permissions to allow the workflow to write data

## Google Sheets Template Details

The workflow creates a structured dataset with these columns:

- **node**: Node identifier (e.g., n8n-nodes-base.slack)
- **nodeVersion**: Version of the node (e.g., 1.0.0)
- **codexVersion**: Codex version number (e.g., 1.0.0)
- **categories**: Node categories (e.g., Communication, Productivity)
- **credentialDocumentation**: URL to credential documentation
- **primaryDocumentation**: URL to primary node documentation

## Customization Options

### Modifying Data Extraction

You can customize the "Format Data" node to extract additional fields:

1. Add new assignments in the Set node
2. Modify the column mapping in the Google Sheets node
3. Update your spreadsheet headers accordingly

### Changing Update Frequency

To run this workflow on a schedule:

1. Replace the Manual Trigger with a Cron node
2. Set your desired schedule (e.g., daily, weekly)
3. Configure appropriate timing to avoid API rate limits

### Adding Filters

Customize the "Filter Node Files" code node to:

- Filter specific node types
- Include/exclude certain categories
- Process only recently updated nodes

A sketch of this filtering step follows below.

## Features

- Fetches all node definitions from the n8n-io/n8n repository
- Filters for .node.json files only
- Downloads and parses metadata automatically
- Extracts key fields like node names, versions, categories, and documentation URLs
- Appends structured data to Google Sheets with batch processing
- Includes error handling and retry mechanisms
- Clears existing data before appending new information for fresh results
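Here is a sketch of the filtering logic, assuming the repository tree is fetched via GitHub's git trees API; the branch name and token handling are illustrative:

```typescript
// Sketch of the "Filter Node Files" step: list the n8n repository tree and
// keep only *.node.json definition files. An authenticated request avoids
// GitHub's low unauthenticated rate limits.
async function listNodeJsonFiles(): Promise<string[]> {
  const res = await fetch(
    "https://api.github.com/repos/n8n-io/n8n/git/trees/master?recursive=1",
    { headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } },
  );
  const { tree } = await res.json();
  return tree
    .filter((entry: { path: string }) => entry.path.endsWith(".node.json"))
    .map((entry: { path: string }) => entry.path);
}
```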
## Use Cases

This workflow is ideal for:

- Tracking changes in official n8n node definitions over time
- Auditing node categories and documentation links for completeness
- Building custom dashboards from node metadata
- Community management and documentation maintenance
- Integration planning and compatibility analysis
by Miha
# Combine Tech News in a Personalized Weekly Newsletter

This n8n template automates the collection, storage, and summarization of technology news from top sites, turning it into a concise, personalized weekly newsletter. If you like staying informed but want to reduce daily distractions, this workflow is perfect for you. It leverages RSS feeds, vector databases, and LLMs to read and curate tech content on your behalf—so you only receive what truly matters.

## How it works

1. A daily scheduled trigger fetches articles from multiple popular tech RSS feeds like Wired, TechCrunch, and The Verge.
2. Fetched articles are:
   - Normalized to extract titles, summaries, and publish dates.
   - Converted to vector embeddings via OpenAI and stored in memory for fast semantic querying (see the sketch at the end of this section).
3. A weekly scheduled trigger activates the AI summarization flow:
   - The AI is provided with your interests (e.g., AI, games, gadgets) and the desired number of items (e.g., 15).
   - It queries the vector store to retrieve relevant articles and summarizes the most newsworthy stories.
4. The summary is converted into a clean, email-friendly format and sent to your inbox.

## How to use

1. Connect your OpenAI and Gmail accounts to n8n.
2. Customize the list of RSS feeds in the "Set Tech News RSS Feeds" node.
3. Update your interests and number of desired news items in the "Your Topics of Interest" node.
4. Activate the workflow and let the automation run on schedule.

## Requirements

- **OpenAI** credentials for embeddings and summarization
- **Gmail** (or another email service) for sending the newsletter

## Customizing this workflow

- Want to use different sources? Swap in your own RSS feeds, or use an API-based news aggregator.
- Replace the in-memory vector store with Pinecone, Weaviate, or another persistent vector DB for longer-term storage.
- Adjust the agent's summarization style to suit internal updates, industry-specific briefings, or even entertainment recaps.
- Prefer chat over email? Replace the email node with a Telegram bot to receive your personalized tech newsletter directly in a Telegram chat.
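To illustrate the embed-and-store step, here is a rough sketch outside n8n; a plain array stands in for the in-memory vector store, and the embedding model name is an assumption:

```typescript
// Illustrative sketch: normalize an RSS item and embed it for semantic search.
// The array below stands in for n8n's in-memory vector store.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

type StoredArticle = { vector: number[]; title: string; summary: string };
const store: StoredArticle[] = [];

async function indexArticle(title: string, summary: string) {
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small", // example embedding model
    input: `${title}\n${summary}`,
  });
  store.push({ vector: res.data[0].embedding, title, summary });
}
```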
by Budi SJ
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## 🎯 Purpose

This workflow helps you automatically monitor stock-related news, extract the main content, summarize it using an LLM (via OpenRouter), send real-time alerts to Telegram, and store the results in Google Sheets.

## ⚙️ How It Works

1. A Cron node triggers the workflow every 15 minutes (adjustable).
2. An RSS Feed node checks the latest articles from the Google Alerts RSS feed.
3. The workflow filters duplicates using Google Sheets as a log.
4. The article URL is sent to the Jina AI Readability API to extract the main body text (see the sketch at the end of this section).
5. The content is summarized using a model from OpenRouter (e.g., Gemini, Claude, GPT-4). You can customize the prompt to suit your tone and analysis needs.
6. The result is appended to a Google Sheets file.
7. The title, summary, and recommendation are sent to a Telegram chat.

## 🧾 Google Sheets Template

Create a Google Sheet using this template: Stock Alert

## 🧰 Requirements

- Telegram Bot + your Chat ID
- OpenRouter account and API key
- Jina AI account for content extraction
- Google Account with access to Google Sheets
- Google Alerts RSS feed

## 🛠 Setup Instructions

1. Install required credentials:
   - Add your OpenRouter API key to n8n credentials.
   - Add your Telegram Bot Token and Chat ID.
   - Add Google Sheets credentials.
   - Add Jina AI credentials.
2. Create or copy the Google Sheet using the link above.
3. Go to Google Alerts, create alerts, and copy the RSS feed URL.
4. Replace placeholder API keys and URLs.
5. Adjust the Telegram Chat ID.

## 🔐 Security Note

All sensitive credentials (e.g., API keys, personal chat IDs) have been removed from this template. Please replace them using the n8n credentials manager before activating the workflow.
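For reference, the extraction step boils down to a single request to Jina's Reader endpoint, which returns the readable main text when you prefix the article URL; supplying the key via a Bearer header is an assumption based on how Jina credentials are typically passed:

```typescript
// Sketch of the Jina AI Readability call: prefix the article URL with
// r.jina.ai to get back the main body text instead of raw HTML.
async function extractArticle(url: string): Promise<string> {
  const res = await fetch(`https://r.jina.ai/${url}`, {
    headers: { Authorization: `Bearer ${process.env.JINA_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Jina Reader returned HTTP ${res.status}`);
  return res.text(); // cleaned article text, ready for summarization
}
```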
by Mary Newhauser
# Build a Weekly AI Trend Alerter with arXiv and Weaviate

Ditch the endless scroll for AI trends. Meet Archi, your personal AI research assistant that hits you up once a week with everything you need to know. 🧑🏽‍🔬

This workflow scrapes AI and machine learning article abstracts from arXiv, enriches them with topic categories using an LLM, and embeds them in a Weaviate vector store. The vector store is then used as a tool for agentic RAG to write a concise, easy-to-read summary of the week in AI research. The final output is a short, weekly email sent to the address of your choice that summarizes key AI research trends and future research directions, with links directly to the most interesting and impactful arXiv papers of the week.

## Who it's for

This workflow is for anyone who can't keep up with all the latest AI advances. Coding skills are not required.

## How it works

This is a contiguous workflow that can be summarized in two main parts: a data pipeline that fetches and embeds articles in Weaviate, and an agentic workflow that generates a weekly email summary.

### Part 1: Automatically fetch newly published articles on a weekly basis

1. Fetch article abstracts (and metadata) from arXiv's free API (see the fetch sketch later in this section)
2. Pre-process abstract data
3. Enrich each article with a primary topic, secondary topics, and the estimated potential impact of the research using an LLM
4. Post-process data
5. Insert data and embeddings into Weaviate

### Part 2: Use an AI Agent and Weaviate to generate a weekly summary email

1. Add Weaviate as a Tool to an AI Agent node
2. Query Weaviate, agentically, to generate a report on the most important research trends of the week
3. Post-process data
4. Send the summary via email

## Prerequisites

- An existing Weaviate cluster. You can view instructions for setting up a local cluster with Docker here or a Weaviate Cloud cluster here.
- API keys to generate embeddings and power chat models. We use a combination of OpenRouter and OpenAI models; feel free to switch out the models as you like.
- An email address with SMTP privileges. This is the address the email will come from. In this demo we use a personal Gmail address. You can create a new credential to link an SMTP account using these instructions.
- A self-hosted n8n instance. See this video for how to get set up in just three minutes.

## How to run the workflow

1. Go through the prerequisites: create a Weaviate cluster (local or cloud), download self-hosted n8n, set up SMTP privileges for your email account, and add your API keys and other credentials.
2. Select the embedding and chat models you'd like to use.
3. Enter the email addresses you want to send the email from and to.
4. Let it rip.
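Here is a rough sketch of the arXiv fetch from Part 1, using the public export API; the category query and result count are examples:

```typescript
// Sketch of the weekly abstract fetch from arXiv's free export API.
// The response is an Atom XML feed; parse <entry> elements downstream.
async function fetchRecentAbstracts(maxResults = 100): Promise<string> {
  const params = new URLSearchParams({
    search_query: "cat:cs.LG OR cat:cs.AI", // example categories
    sortBy: "submittedDate",
    sortOrder: "descending",
    max_results: String(maxResults),
  });
  const res = await fetch(`https://export.arxiv.org/api/query?${params}`);
  return res.text();
}
```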
## Workflow output

The output of this workflow is a weekly email that summarizes key research trends and future research directions based on AI and ML papers published on arXiv. Here's an example of a summary email:

> Hey there,
>
> Here's a quick rundown of the key trends in Machine Learning research from the past week.
>
> **Key Research Trends This Week**
>
> This week saw significant advancements in retrieval-augmented systems, foundation models for specialized domains, and techniques balancing efficiency with performance.
>
> - **Advanced RAG Architectures**: Researchers are developing sophisticated RAG frameworks that go beyond simple document retrieval, with AdaPCR introducing passage combination retrieval and UrbanMind proposing a framework for urban intelligence with multilevel optimization.
> - **Foundation Models for Tabular Data**: Real-TabPFN shows that targeted continued pre-training on real-world datasets can significantly boost the performance of foundation models for tabular data, outperforming models trained on broader, potentially noisier datasets.
> - **Efficiency-Focused Techniques**: Researchers are developing resourceful methods that maintain performance without expensive computations, like logit reweighting for topic-focused summarization and strategic querying for privacy-preserving personalization.
>
> **Future Research Directions**
>
> Based on current trends, we expect to see the following developments in the near future:
>
> - **Explainable RAG Systems**: Following the source attribution work in RAG systems, we can expect more research into making complex retrieval systems transparent and explainable for users.
> - **Cross-Domain and Cross-Modal Fusion**: The promising performance of vision-language and code-specialized LLMs in retrieval tasks points toward unified retrievers capable of handling text, code, images, and multimodal content.
> - **Data-Centric Synthetic Generation**: As shown by work on synthetic relational tabular data, we'll likely see more sophisticated approaches to generating high-quality synthetic data for pre-training foundation models in specialized domains.
>
> This week highlights how researchers are making AI more efficient, explainable, and applicable to specialized domains. Look out for more developments in RAG systems, tabular foundation models, and privacy-preserving AI techniques in the coming weeks.
>
> Until next week,
> Archi

## Want to make it better?

Feel free to tweak, build on, or completely reconfigure this workflow. If you come up with something cool, let us know and we might just share it with our community! 💚
by HoangSP
# SEO Blog Generator with GPT-4o, Perplexity, and Telegram Integration

This workflow helps you automatically generate SEO-optimized blog posts using Perplexity.ai, OpenAI GPT-4o, and optionally Telegram for interaction.

## 🚀 Features

- 🧠 Topic research via a Perplexity sub-workflow
- ✍️ AI-written blog post generated with GPT-4o
- 📊 Structured output with metadata: title, slug, meta description
- 📩 Optional Telegram integration to trigger workflows or receive outputs

## ⚙️ Requirements

- ✅ OpenAI API Key (GPT-4o or GPT-3.5)
- ✅ Perplexity API Key (with access to /chat/completions)
- ✅ (Optional) Telegram Bot Token and webhook setup

## 🛠 Setup Instructions

1. **Credentials:**
   - Add your OpenAI credentials (openAiApi)
   - Add your Perplexity credentials under httpHeaderAuth
   - Optional: Set up Telegram credentials under telegramApi
2. **Inputs:** Use the Form Trigger or Telegram input node to send a Research Query
3. **Subworkflow:** Make sure to import and activate the subworkflow Perplexity_Searcher to fetch recent search results
4. **Customization:**
   - Edit the prompt texts inside the Blog Content Generator and Metadata Generator to change the writing style or target industry
   - Add or remove output nodes like Google Sheets, Notion, etc.

## 📦 Output Format

The final blog post includes:

- ✅ Blog content (1500-2000 words)
- ✅ Metadata: title, slug, and meta description (see the illustrative shape below)
- ✅ Extracted summary in JSON
- ✅ Delivery to Telegram (if connected)

Need help? Reach out on the n8n community forum.
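As an illustration, the structured output might take a shape like this; the field names are assumptions, so align them with your Metadata Generator prompt:

```typescript
// Hypothetical shape of the generated post plus metadata.
interface BlogPostOutput {
  title: string;           // SEO title
  slug: string;            // URL-friendly identifier
  metaDescription: string; // ~155-character snippet for search results
  content: string;         // the 1500-2000 word article body
}

const example: BlogPostOutput = {
  title: "How Small Teams Automate SEO Research",
  slug: "how-small-teams-automate-seo-research",
  metaDescription:
    "A practical look at automating topic research and drafting with AI.",
  content: "...", // full article text
};
```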
by Mihai Farcas
This workflow implements a Retrieval Augmented Generation (RAG) chatbot that answers employee questions based on company documents stored in Google Drive. It automatically indexes new or updated documents in a Pinecone vector database, allowing the chatbot to provide accurate and up-to-date information. The workflow uses Google's Gemini AI for both embeddings and response generation.

## How it works

The workflow uses two Google Drive Trigger nodes: one for detecting new files added to a specified Google Drive folder, and another for detecting file updates in that same folder.

**Automated indexing.** When a new or updated document is detected:

1. The Google Drive node downloads the file.
2. The Default Data Loader node loads the document content.
3. The Recursive Character Text Splitter node breaks the document into smaller text chunks.
4. The Embeddings Google Gemini node generates embeddings for each text chunk using the text-embedding-004 model (see the sketch at the end of this section).
5. The Pinecone Vector Store node indexes the text chunks and their embeddings in a specified Pinecone index.

**Chat.**

1. The Chat Trigger node receives user questions through a chat interface.
2. The user's question is passed to an AI Agent node.
3. The AI Agent node uses a Vector Store Tool node, linked to a Pinecone Vector Store node in query mode, to retrieve relevant text chunks from Pinecone based on the user's question.
4. The AI Agent sends the retrieved information and the user's question to the Google Gemini Chat Model (gemini-pro).
5. The Google Gemini Chat Model generates a comprehensive and informative answer based on the retrieved documents.
6. A Window Buffer Memory node connected to the AI Agent provides short-term memory, allowing for more natural and context-aware conversations.

## Set up steps

1. **Google Cloud Project and Vertex AI API:**
   - Create a Google Cloud project.
   - Enable the Vertex AI API for your project.
2. **Google AI API Key:** Obtain a Google AI API key from Google AI Studio.
3. **Pinecone Account:**
   - Create a free account on the Pinecone website.
   - Obtain your API key from your Pinecone dashboard.
   - Create an index named company-files in your Pinecone project.
4. **Google Drive:** Create a dedicated folder in your Google Drive where company documents will be stored.
5. **Credentials in n8n:** Configure credentials in your n8n environment for:
   - Google Drive OAuth2
   - Google Gemini (PaLM) API (using your Google AI API key)
   - Pinecone API (using your Pinecone API key)
6. **Import the Workflow:** Import this workflow into your n8n instance.
7. **Configure the Workflow:**
   - Update both Google Drive Trigger nodes to watch the specific folder you created in your Google Drive.
   - Configure the Pinecone Vector Store nodes to use your company-files index.
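To make the indexing step concrete, here is a sketch of chunking and embedding outside n8n, mirroring what the Text Splitter and Gemini Embeddings nodes do; the package names and chunk sizes are assumptions:

```typescript
// Sketch of the indexing step: split a document into overlapping chunks and
// embed each with Gemini's text-embedding-004, as the workflow's nodes do.
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_AI_API_KEY!);
const embedder = genAI.getGenerativeModel({ model: "text-embedding-004" });

async function chunkAndEmbed(documentText: string) {
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,   // characters per chunk; tune for your documents
    chunkOverlap: 200, // overlap preserves context across chunk boundaries
  });
  const chunks = await splitter.splitText(documentText);
  const vectors = await Promise.all(
    chunks.map(async (chunk) => (await embedder.embedContent(chunk)).embedding.values),
  );
  return { chunks, vectors }; // ready to upsert into the company-files index
}
```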
by Robert Breen
# Extract Local Business Contacts with Google Sheets, SerpAPI & GPT-4o

Status: Ready for Use ✅

Disclaimer: This workflow relies on community nodes that are not part of n8n's core package. Install the following from n8n → Community Nodes before running:

- **n8n-nodes-langchain**
- **n8n-nodes-openai** (Structured Output Parser)
- **n8n-nodes-apify**

## 📝 Description

This n8n workflow automates the discovery of local-business contact details by search term and location, then enriches the results with publicly listed email addresses using GPT-4o.

## 🔑 Key Features

### 🔗 Google Sheets Integration

- Reads search terms and locations from a Google Sheet.
- Processes only rows that are not marked Complete, preventing duplicates.

### 🗺️ Google Maps Search via SerpAPI

- Queries Google Maps through SerpAPI for every search-term-and-location pair (see the sketch after the run steps below).
- Retrieves the following fields: business name, website, street address, and phone number.

### 🧠 Website Scraping & Email Extraction

- Scrapes the business homepage content with Apify's Fast Website Content Crawler.
- Sends the scraped HTML to a GPT-4o AI Agent.
- Extracts any publicly listed email address.
- Returns a clean, structured JSON object for downstream use.

### 💾 Data Storage & Tracking

- Writes every result to a Results tab in the same Google Sheet.
- Marks the corresponding row in the Searches tab as Complete once finished.

### 🧱 Extensible Design

The workflow uses modular sub-workflows and AI agents. You can easily extend it to add:

- Phone-number verification with Twilio
- Social-media enrichment with Clearbit
- Exports to HubSpot, Salesforce, Airtable, PostgreSQL, or CSV files

## 📄 Google Sheet Setup

Create a Searches tab with these exact columns (one header row):

Search | Area | Area Name | Complete

Create a Results tab with these columns:

title | website | address | phone | Search | Search Name | Area | email (Manual Entry)

## ⚙️ Prerequisites

- Google Cloud Project with the Google Sheets API and Google Drive API enabled
- SerpAPI account (free trial or paid) with an API key
- Apify account (free trial or paid) with the Fast Website Content Crawler actor installed
- OpenAI account with an API key that can access GPT-4o models

## 🚀 Setup Instructions

1. **Copy the Google Sheet.** Make a personal copy of the template sheet and ensure the tab names are Searches and Results. https://docs.google.com/spreadsheets/d/1QgcVMlXRlM_5ZFFUHr6bVK-93Tzia9XseTX03ZYnowI/edit?usp=sharing
2. **Configure the Google Sheets nodes in n8n.** Open the workflow, update the Extract Search Terms and Save Emails to Sheet nodes to point at your copied sheet, and authenticate using Google OAuth2 credentials that have access to the sheet.
3. **Add SerpAPI credentials.** Sign in at https://serpapi.com and copy your API key. In the Search Google Maps node, create a new credential and paste the key.
4. **Set up Apify.** Sign up at https://apify.com and add the Fast Website Content Crawler actor to your account. In the Scrape Web Page HTTP node, append ?token=YOUR_API_KEY to the actor URL.
5. **Add your OpenAI API key.** Go to https://platform.openai.com, generate an API key, and add it to the AI Agent and OpenAI Chat Model node credentials.

## ✅ Running the Workflow

Click Execute Workflow in n8n. For each unprocessed row in the Searches tab, the automation will:

1. Retrieve business information from Google Maps via SerpAPI.
2. Scrape the business website using Apify.
3. Use GPT-4o to extract a public email address.
4. Write all collected data to the Results tab.
5. Mark the original row as Complete.
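Here is a sketch of the SerpAPI lookup behind the Search Google Maps node; parameter names follow SerpAPI's google_maps engine, and the result mapping matches the Results tab columns:

```typescript
// Sketch of the Google Maps search via SerpAPI, returning the fields the
// workflow writes to the Results tab.
async function searchLocalBusinesses(term: string, area: string) {
  const params = new URLSearchParams({
    engine: "google_maps",
    q: `${term} in ${area}`,
    api_key: process.env.SERPAPI_KEY!, // from your SerpAPI dashboard
  });
  const res = await fetch(`https://serpapi.com/search.json?${params}`);
  const body = await res.json();
  return (body.local_results ?? []).map((r: any) => ({
    title: r.title,
    website: r.website,
    address: r.address,
    phone: r.phone,
  }));
}
```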
## 🧩 Example Use Cases

- Build highly targeted lead lists for sales and marketing outreach.
- Compile local business directories for regional websites or apps.
- Automate contact-information collection for lead-generation campaigns and reduce manual data entry.

## 🤝 Connect with Me

I'm Robert Breen, founder of Ynteractive — a consulting firm that helps businesses automate operations using n8n, AI agents, and custom workflows. I've helped clients build everything from intelligent chatbots to complex sales automations, and I'm always excited to collaborate or support new projects. If you found this workflow helpful or want to talk through an idea, I'd love to hear from you.

- 🌐 Website: https://www.ynteractive.com
- 📺 YouTube: @ynteractivetraining
- 💼 LinkedIn: https://www.linkedin.com/in/robert-breen
- 📬 Email: rbreen@ynteractive.com