by PollupAI
This n8n workflow automates the import of your Google Keep notes into a structured Google Sheet, using Google Drive, OpenAI for AI-powered processing, and JSON file extraction. It's perfect for users who want to turn exported Keep notes into a searchable, filterable spreadsheet – optionally enhanced by AI summarization or transformation.

**Who is this for?**
- Researchers, knowledge workers, and digital minimalists who rely on Google Keep and want to better organize or analyze their notes.
- Anyone who regularly exports Google Keep notes and wants a clean, automated workflow to store them in Google Sheets.
- Users looking to apply AI to process, summarize, or extract insights from raw notes.

**What problem is this workflow solving?**
Exporting Google Keep notes via Google Takeout gives you unstructured .json files that are hard to read and manage. This workflow solves that by:
- Filtering relevant .json files
- Extracting note content
- (Optionally) applying AI to analyze or summarize each note
- Writing the result into a structured Google Sheet

**What this workflow does**
1. Google Drive Search: Looks for .json files inside a specified "Keep" folder.
2. Loop: Processes files in batches of 10.
3. File Filtering: Filters by .json extension.
4. Download + Extract: Downloads each file and extracts note content from JSON.
5. Optional Filtering: Only keeps non-archived notes or those meeting content criteria.
6. AI Processing (optional): Uses OpenAI to summarize or transform the note content.
7. Prepare for Export: Maps note fields to be written.
8. Google Sheets: Appends or updates the target sheet with the note data.

**Setup**
1. Export your Google Keep notes using Google Takeout: deselect all, then choose only Google Keep, and choose “Send download link via email”.
2. Unzip the downloaded archive and upload the .json files to your Google Drive.
3. Connect Google Drive, OpenAI, and Google Sheets in n8n.
4. Set the correct folder path for your notes in the “Search in ‘Keep’ folder” node.
5. Point the Google Sheets node to your spreadsheet.

**How to customize this workflow to your needs**
- Skip AI processing: If you don't need summaries or transformations, remove or disable the OpenAI Chat Model node.
- Filter criteria: Customize the Filter node to extract only recent notes, or those containing specific keywords.
- AI prompts: Edit the Tools Agent or Chat Model node to instruct the AI to summarize, extract tasks, categorize notes, etc.
- Field mapping: Adjust the “Set fields for export” node to control what gets written to the spreadsheet.

Use this template to build a powerful knowledge extraction tool from your Google Keep archive – ideal for backups, audits, or data-driven insights.
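The extract-and-filter step depends on the shape of Google Keep's Takeout JSON. As a rough, hedged illustration (field names such as `title`, `textContent`, and `isArchived` reflect typical Keep exports and should be checked against your own files), a Code node that drops archived notes and maps fields for export might look like this:

```javascript
// n8n Code node ("Run Once for All Items") - minimal sketch, assumed Takeout field names.
// Each incoming item is expected to hold one parsed Keep note under item.json.
return $input.all()
  .filter(item => !item.json.isArchived)          // keep only non-archived notes
  .map(item => {
    const note = item.json;
    return {
      json: {
        title: note.title || '(untitled)',
        content: note.textContent || '',
        labels: (note.labels || []).map(l => l.name).join(', '),
        // Keep stores edit time in microseconds since epoch
        editedAt: note.userEditedTimestampUsec
          ? new Date(Number(note.userEditedTimestampUsec) / 1000).toISOString()
          : null,
      },
    };
  });
```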
by Ranjan Dailata
**Who this is for**
The Google Trend Data Extract & Summarization workflow is ideal for trend researchers, digital marketers, content strategists, and AI developers who want to automate the extraction, summarization, and distribution of Google Trends data. This end-to-end solution helps transform trend signals into human-readable insights and delivers them across multiple channels.

It is built for:
- **Market Researchers** - Tracking trends by topic or region
- **Content Strategists** - Identifying content opportunities from trending data
- **SEO Analysts** - Monitoring search volume and shifts in keyword popularity
- **Growth Hackers** - Reacting quickly to real-time search behavior
- **AI & Automation Engineers** - Creating automated trend monitoring systems

**What problem is this workflow solving?**
Google Trends data can provide rich insights into user interests, but the raw data is not always structured or easily interpretable at scale. Manually extracting, cleaning, and summarizing trends from multiple regions or categories is time-consuming. This workflow solves the following problems:
- Automates the conversion of markdown or scraped HTML into clean textual input
- Transforms unstructured data into a structured format ready for processing
- Uses AI summarization to generate easy-to-read insights from Google Trends
- Distributes summaries via email and webhook notifications
- Persists responses to disk for archiving, auditing, or future analytics

**What this workflow does**
1. Receives input: Sets a URL for the data extraction and analysis.
2. Uses Bright Data’s Web Unlocker to extract content from the relevant site.
3. Markdown to Textual Data Extractor: Converts markdown content into plaintext using n8n’s Function or Markdown nodes (see the sketch at the end of this section).
4. Structured Data Extract: Parses the plaintext into structured JSON suitable for AI processing.
5. Summarize Google Trends: Sends structured data to Google Gemini with a summarization prompt to extract key takeaways.
6. Send Summary via Gmail: Composes an email with the AI-generated summary and sends it to a designated recipient.
7. Persist to Disk: Writes the AI structured data to disk.
8. Webhook Notification: Sends the summarized response to an external system (e.g., Slack, Notion, Zapier) using a webhook.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. Obtain a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Set URL and Bright Data Zone node to set the brand content URL and the Bright Data zone name.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**
- Update Source: Update the workflow input to read from Google Sheets, Airtable, etc.
- Gemini Prompt Tuning: Customize prompts to extract summaries like "Summarize the most significant trend shifts" or "Generate content ideas from the trending search topics".
- Email Personalization: Configure the Gmail node to use dynamic subject lines like "Weekly Google Trends Summary – {{date}}" and send to multiple stakeholders or mailing lists.
- File Storage Customization: Save with timestamps, e.g., trends_summary_2025-04-29.json; extend to S3 or cloud drive integrations.
- Webhook Use Cases: Send the summary to internal dashboards, Slack channels, or automation tools like Make, Zapier, etc.
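The markdown-to-plaintext step can be handled by n8n's Markdown node or a small Function/Code node. As a minimal, illustrative sketch (the regexes are a rough approximation, not a full markdown parser, and the `markdown` field name is an assumption about the preceding node's output):

```javascript
// n8n Code node - rough markdown-to-plaintext cleanup (illustrative only).
return $input.all().map(item => {
  const md = item.json.markdown || '';
  const text = md
    .replace(/```[\s\S]*?```/g, ' ')          // drop fenced code blocks
    .replace(/!\[[^\]]*\]\([^)]*\)/g, ' ')    // drop images
    .replace(/\[([^\]]+)\]\([^)]*\)/g, '$1')  // keep link text, drop URLs
    .replace(/^[#>*\-+\s]+/gm, '')            // strip heading/list/quote markers
    .replace(/[*_`]+/g, '')                   // strip emphasis markers
    .replace(/\s+/g, ' ')
    .trim();
  return { json: { ...item.json, plainText: text } };
});
```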
by Immanuel
**Automated Raw Materials Inventory Management with Google Sheets, Supabase, and Gmail using n8n Webhooks**

**Description**

**What Problem Does This Solve? 🛠️**
This workflow automates raw materials inventory management for businesses, eliminating manual stock updates, delayed material issue approvals, and missed low stock alerts. It ensures real-time stock tracking, streamlined approvals, and timely notifications. Target audience: small to medium-sized businesses, inventory managers, and n8n users familiar with Google Sheets, Supabase, and Gmail integrations.

**What Does It Do? 🌟**
- Receives raw material data and issue requests via form submissions.
- Updates stock levels in Google Sheets and Supabase.
- Manages approvals for material issue requests with email notifications.
- Detects low stock levels and sends alerts via Gmail.
- Maintains data consistency across Google Sheets and Supabase.

**Key Features**
- Real-time stock updates from form submissions.
- Automated approval process for material issuance.
- Low stock detection with Gmail notifications.
- Dual storage in Google Sheets and Supabase for redundancy.
- Error handling for robust data validation.

**Setup Instructions**

**Prerequisites**
- **n8n Instance**: Self-hosted or cloud n8n instance.
- **API Credentials**:
  - Google Sheets API: Credentials from Google Cloud Console with Sheets scope, stored in n8n credentials.
  - Supabase API: API key and URL from the Supabase project, stored in n8n credentials (do not hardcode in nodes).
  - Gmail API: Credentials from Google Cloud Console with Gmail scope.
- **Forms**: A form (e.g., Google Form) to submit raw material receipts and issue requests, configured to send data to n8n webhooks.

**Installation Steps**
1. Import the Workflow: Copy the workflow JSON from the “Template Code” section (to be provided) and import it into n8n via “Import from File” or “Import from URL”.
2. Configure Credentials: Add API credentials in n8n’s Credentials section for Google Sheets, Supabase, and Gmail, and assign them to the respective nodes. For example:
   - In the Append Raw Materials node, use Google Sheets credentials: {{ $credentials.GoogleSheets }}.
   - In the Current Stock Update node, use Supabase credentials: {{ $credentials.Supabase }}.
   - In the Send Low Stock Email Alert node, use Gmail credentials.
3. Set Up Nodes:
   - Webhook Nodes (Receive Raw Materials Webhook, Receive Material Issue Webhook): Configure webhook URLs and link them to your form submissions.
   - Approval Email (Send Approval Request): Customize the HTML email template if needed.
   - Low Stock Alerts (Send Low Stock Email Alert, Send Low Stock Email After Issue): Configure recipient email addresses.
4. Test the Workflow: Submit a test form for a raw material receipt and verify stock updates in Google Sheets/Supabase. Submit a material issue request, approve/reject it, and confirm stock updates and notifications.

**How It Works**

**High-Level Steps**
1. Receive Raw Materials: Processes form submissions for raw material receipts.
2. Update Stock: Updates stock levels in Google Sheets and Supabase.
3. Handle Issue Requests: Processes material issue requests via forms.
4. Manage Approvals: Sends approval requests and processes decisions.
5. Monitor Stock Levels: Detects low stock and sends Gmail alerts.

**Detailed Descriptions**
Detailed node descriptions are available in the sticky notes within the workflow screenshot (to be provided). Below is a summary of key actions.

**Node Names and Actions**

**Raw Materials Receiving and Stock Update**
- **Receive Raw Materials Webhook**: Receives raw material data from a form submission.
- **Standardize Raw Material Data**: Maps form data into a consistent format.
- **Calculate Total Price**: Computes Total Price (Quantity Received * Unit Price) (see the sketch after the Customization Tips).
- **Append Raw Materials**: Records the receipt in Google Sheets.
- **Check Quantity Received Validity**: Ensures Quantity Received is valid.
- **Lookup Existing Stock**: Retrieves current stock for the Product ID.
- **Check If Product Exists**: Branches based on Product ID existence.
- **Calculate Updated Current Stock**: Adds Quantity Received to stock (True branch).
- **Update Current Stock**: Updates stock in Google Sheets (True branch).
- **Retrieve Updated Stock for Check**: Retrieves updated stock for the low stock check.
- **Detect Low Stock Level**: Flags if stock is below minimum.
- **Trigger Low Stock Alert**: Triggers an email if stock is low.
- **Send Low Stock Email Alert**: Sends a low stock alert via Gmail.
- **Add New Product to Stock**: Adds a new product to stock (False branch).
- **Current Stock Update**: Updates the Supabase Current Stock table.
- **New Row Current Stock**: Inserts a new product into Supabase.
- **Search Current Stock**: Retrieves Supabase stock records.
- **New Record Raw**: Inserts the raw material record into Supabase.
- **Format Response**: Removes duplicates from the Supabase response.
- **Combine Stock Update Branches**: Merges branches for existing/new products.

**Material Issue Request and Approval**
- **Receive Material Issue Webhook**: Receives the issue request from a form submission.
- **Standardize Data**: Normalizes request data and adds the Approval Link.
- **Validate Issue Request Data**: Ensures Quantity Requested is valid.
- **Verify Requested Quantity**: Validates Product ID and Submission ID.
- **Append Material Request**: Records the request in Google Sheets.
- **Check Available Stock for Issue**: Retrieves current stock for the request.
- **Prepare Approval**: Checks stock sufficiency for the request.
- **Send Approval Request**: Emails the approver with Approve/Reject options.
- **Receive Approval Response**: Captures the approver’s decision via webhook.
- **Format Approval Response**: Processes approval data with Approval Date.
- **Verify Approval Data**: Validates the approval response.
- **Retrieve Issue Request Details**: Retrieves the original request from Google Sheets.
- **Process Approval Decision**: Branches based on the approval action.
- **Get Stock for Issue Update**: Retrieves stock before the update (Approved).
- **Deduct Issued Stock**: Reduces stock by Approved Quantity (Approved).
- **Update Stock After Issue**: Updates stock in Google Sheets (Approved).
- **Retrieve Stock After Issue**: Retrieves updated stock for the low stock check.
- **Detect Low Stock After Issue**: Flags low stock after issuance.
- **Trigger Low Stock Alert After Issue**: Triggers an email if stock is low.
- **Send Low Stock Email After Issue**: Sends a low stock alert via Gmail.
- **Update Issue Request Status**: Updates the request status (Approved/Rejected).
- **Combine Stock Lookup Results**: Merges stock lookup branches.
- **Create Record Issue**: Inserts the issue request into Supabase.
- **Search Stock by Product ID**: Retrieves Supabase stock records.
- **Issues Table Update**: Updates the Supabase Materials Issued table.
- **Update Current Stock**: Updates Supabase stock after issuance.
- **Combine Issue Lookup Branches**: Merges issue lookup branches.
- **Search Issue by Submission ID**: Retrieves Supabase issue records.

**Customization Tips**
- **Expand Storage Options**: Add nodes to store data in other databases (e.g., Airtable) alongside Google Sheets and Supabase.
- **Modify Approval Email**: Update the Send Approval Request node to customize the HTML email template (e.g., adjust styling or add branding).
- **Alternative Notifications**: Add nodes to send low stock alerts via other platforms (e.g., Slack or Telegram).
- **Adjust Low Stock Threshold**: Modify the Detect Low Stock Level node to change the Minimum Stock Level (default: 50).
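The total-price and low-stock logic described above is straightforward arithmetic. As an illustrative sketch (field names such as `quantityReceived`, `unitPrice`, `currentStock`, and `minimumStock` are assumptions, not the template's exact column names), an equivalent Code node could compute both in one pass:

```javascript
// n8n Code node - illustrative stock math (assumed field names).
const MINIMUM_STOCK_DEFAULT = 50; // matches the template's default threshold

return $input.all().map(item => {
  const d = item.json;
  const quantityReceived = Number(d.quantityReceived) || 0;
  const unitPrice = Number(d.unitPrice) || 0;
  const currentStock = Number(d.currentStock) || 0;
  const minimumStock = Number(d.minimumStock) || MINIMUM_STOCK_DEFAULT;

  const totalPrice = quantityReceived * unitPrice;       // "Calculate Total Price"
  const updatedStock = currentStock + quantityReceived;  // "Calculate Updated Current Stock"

  return {
    json: {
      ...d,
      totalPrice,
      updatedStock,
      lowStock: updatedStock < minimumStock,              // "Detect Low Stock Level"
    },
  };
});
```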
by Ranjan Dailata
**Notice**
Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**
The Automated Resume Job Matching Engine is an intelligent workflow designed for career platforms, HR tech startups, recruiting firms, and AI developers who want to streamline job-resume matching using real-time data from LinkedIn and job boards.

This workflow is tailored for:
- **HR Tech Founders** - Building next-gen recruiting products
- **Recruiters & Talent Sourcers** - Seeking automated candidate-job fit evaluation
- **Job Boards & Portals** - Enriching user experience with AI-driven job recommendations
- **Career Coaches & Resume Writers** - Offering personalized job fit analysis
- **AI Developers** - Automating large-scale matching tasks using LinkedIn and job data

**What problem is this workflow solving?**
Manually matching a resume to a job description is time-consuming, biased, and inefficient. Additionally, accessing live job postings and candidate profiles requires overcoming web scraping limitations. This workflow solves:
- Automated LinkedIn profile and job post data extraction using Bright Data MCP infrastructure
- Semantic matching between job requirements and the candidate's resume using OpenAI 4o mini
- Pagination handling for high-volume job data
- End-to-end automation from scraping to delivery via webhook, persisting the job-match response to disk

**What this workflow does**

**Bright Data MCP for Job Data Extraction**
- Uses Bright Data MCP clients to extract multiple job listings (supports pagination)
- Pulls job data from LinkedIn with the pre-defined filtering criteria

**OpenAI 4o mini LLM Matching Engine**
- Extracts paginated job data from the Bright Data MCP results via the MCP scrape_as_html tool.
- Extracts textual job description information from the scraped job data by leveraging the Bright Data MCP scrape_as_html tool.
- The AI Job Matching node compares the job description and the candidate's resume to generate match scores with insights.

**Data Delivery**
- Sends the final match report to a Webhook Notification endpoint
- Persists the AI-matched job response to disk

**Pre-conditions**
- Knowledge of Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**
1. Please make sure to set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel.
6. In n8n, configure the OpenAI account credentials.
7. In n8n, configure the credentials to connect with the MCP Client (STDIO) account using the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox above as API_TOKEN=<your-token>.
8. Update the Set input fields for the candidate resume, keywords, and other filtering criteria.
9. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
10. Update the file name and path to persist on disk.
**How to customize this workflow to your needs**
- Target Different Job Boards: Set the input fields to sites like Indeed, ZipRecruiter, or Monster.
- Customize Matching Criteria: Adjust the prompt inside the AI Job Match node; include scoring metrics like skills match %, experience relevance, or cultural fit (see the sketch below).
- Automate Scheduling: Use a Cron node to periodically check for new jobs matching a profile, or set triggers based on webhooks or input form submissions.
- Output Customization: Add Markdown/PDF formatting for report summaries; extend with Google Sheets export for internal analytics.
- Enhance Data Security: Mask personal info before sending to external endpoints.
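If you extend the match output with per-dimension scores, a small post-processing step can fold them into one weighted score. A minimal sketch (the sub-score names and weights are assumptions you would align with your own AI Job Match prompt):

```javascript
// n8n Code node - combine assumed per-dimension scores (0-100) into one match score.
const WEIGHTS = { skillsMatch: 0.5, experienceRelevance: 0.3, culturalFit: 0.2 };

return $input.all().map(item => {
  const scores = item.json.scores || {}; // e.g. { skillsMatch: 80, experienceRelevance: 60, culturalFit: 70 }
  let weighted = 0;
  for (const [key, weight] of Object.entries(WEIGHTS)) {
    weighted += (Number(scores[key]) || 0) * weight;
  }
  return {
    json: {
      ...item.json,
      matchScore: Math.round(weighted),
      recommendation: weighted >= 70 ? 'strong match' : weighted >= 50 ? 'possible match' : 'weak match',
    },
  };
});
```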
by Ranjan Dailata
The Scrape and Analyze Amazon Product Info with Decodo + OpenAI workflow automates the process of extracting product information from an Amazon product page and transforming it into meaningful insights. The workflow then uses OpenAI to generate descriptive summaries, competitive positioning insights, and structured analytical output based on the extracted information.

**Disclaimer**
Please note - this workflow is only available on n8n self-hosted, as it makes use of the Decodo web scraping community node.

**Who is this for?**
This workflow is ideal for:
- E-commerce product researchers
- Marketplace sellers (Amazon, Flipkart, Shopify, etc.)
- Competitive intelligence teams
- Product comparison bloggers and reviewers
- Pricing and product analytics engineers
- Automation builders needing AI-powered product insights

**What problem is this workflow solving?**
Manually extracting Amazon product details, ads, pricing, reviews, and competitive signals is:
- Time-consuming
- Dependent on switching across tools
- Difficult to analyze at scale
- Not structured for reporting
- Hard to compare products objectively

This workflow automates:
- Web scraping of Amazon product pages
- Extraction of product features and ad listings
- AI-generated product summaries
- Competitive positioning analysis
- Generation of structured product insight output
- Export to Google Sheets for tracking and reporting

**What this workflow does**
This workflow performs an end-to-end product intelligence pipeline, including:
1. Data Collection: Scrapes an Amazon product page using Decodo and retrieves product details and advertisement placements.
2. Data Extraction: Extracts product specs, key feature descriptions, ads data, and supplemental metadata.
3. AI-Driven Analysis: Generates a descriptive product summary, competitive positioning insights, and a structured product insight schema.
4. Data Consolidation: Merges descriptive, analytical, and structured outputs.
5. Export & Persistence: Aggregates results and writes the final dataset to Google Sheets for tracking, comparison, reporting, and product research archives.

**Setup**

**Prerequisites**
- If you are new to Decodo, please sign up at visit.decodo.com
- n8n instance
- Decodo API credentials
- OpenAI API credentials
- Make sure to install the Decodo community node.

**Required Credentials**
- Decodo API: Go to Credentials, add Decodo API, enter the API key, and save as "Decodo Credentials account".
- OpenAI API: Go to Credentials, select OpenAI, enter the API key, and save as "OpenAi account".
- Google Sheets: Add Google Sheets OAuth, authorize via Google, and save as the desired account.

**Inputs to configure**
Modify in the Set the Input Fields node:
- product_url = https://www.amazon.in/Sony-DualSense-Controller-Grey-PlayStation/dp/B0BQXZ11B8

**How to customize this workflow to your needs**
You can easily adapt this workflow for various use cases.
- Change the product being analyzed: Modify product_url.
- Change the AI model: In the OpenAI nodes, replace gpt-4.1-mini; use Gemini, Claude, Mistral, or Groq (if supported).
- Customize the insight schema: Edit the Product Insights node to include sustainability markers, sentiment extraction, pricing bands, safety compliance, or brand comparisons.
- Expand data extraction: You may extract product reviews, FAQs, Q&A, seller information, or delivery and logistics signals.
- Change the output destination: Replace Google Sheets with PostgreSQL, MySQL, Notion, Slack, Airtable, webhook delivery, or CSV export.
- Turn it into a batch processor: Loop over multiple ASINs, category listings, or search results pages (see the sketch below).

**Summary**
This workflow provides a complete automated product intelligence engine, combining Decodo’s scraping capabilities with OpenAI’s analytical reasoning to transform Amazon product pages into structured insights, competitive analysis, and summarized evaluations, automatically stored for reporting and comparison.
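For the batch-processor variation, one simple approach is a Code node that fans a list of ASINs out into one item per product URL, which the rest of the workflow then processes in a loop. A minimal sketch (the marketplace domain and the `asins` input field are assumptions):

```javascript
// n8n Code node - expand a list of ASINs into one item per product URL (illustrative).
const MARKETPLACE = 'https://www.amazon.in'; // adjust to your marketplace

// Assumes the previous node provides item.json.asins as an array or comma-separated string.
const raw = $input.first().json.asins || [];
const asins = Array.isArray(raw)
  ? raw
  : String(raw).split(',').map(s => s.trim()).filter(Boolean);

return asins.map(asin => ({
  json: {
    asin,
    product_url: `${MARKETPLACE}/dp/${asin}`,
  },
}));
```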
by Oskar
This workflow uses AI to analyze the content of every new message in Gmail and then assigns specific labels, according to the context of the email. The default configuration of the workflow includes 3 labels:
- „Partnership” - email about sponsored content or cooperation,
- „Inquiry” - email about products, services,
- „Notification” - email that doesn't require a response.

You can add or edit labels and descriptions according to your use case. 🎬 See this workflow in action in my YouTube video about automating Gmail.

**How it works?**
The Gmail trigger performs polling every minute for new messages (you can change the trigger interval according to your needs). The email content is then downloaded and forwarded to an AI chain.

💡 The prompt in the AI chain node includes instructions for applying labels according to the email content - change label names and instructions to fit your use case.

Next, the workflow retrieves all labels from the Gmail account and compares them with the label names returned from the AI chain. Label IDs are aggregated and applied to the processed email messages (a sketch of this matching step is shown below).

⚠️ Label names in the Gmail account and workflow (prompt, JSON schema) must be the same.

**Set up steps**
1. Set credentials for Gmail and OpenAI.
2. Add labels to your Gmail account (e.g. „Partnership”, „Inquiry” and „Notification”).
3. Change the prompt in the AI chain node (update the list of label names and instructions).
4. Change the list of available labels in the JSON schema in the parser node.
5. Optionally: change the polling interval in the Gmail trigger (by default the interval is 1 minute).

If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
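The name-to-ID matching step can be pictured as a small Code node that joins the AI output with the Gmail label list. A minimal sketch (the `aiLabels` and `gmailLabels` field names are assumptions about how the preceding nodes shape their output):

```javascript
// n8n Code node - map AI-returned label names to Gmail label IDs (illustrative).
// Assumes one incoming item with:
//   json.aiLabels    -> ["Partnership", ...] from the AI chain's structured output
//   json.gmailLabels -> [{ id: "Label_123", name: "Partnership" }, ...] from the Gmail label list
const { aiLabels = [], gmailLabels = [] } = $input.first().json;

const byName = new Map(gmailLabels.map(l => [l.name.toLowerCase(), l.id]));

const labelIds = aiLabels
  .map(name => byName.get(String(name).toLowerCase()))
  .filter(Boolean); // drop names that don't exist in the Gmail account

return [{ json: { labelIds } }];
```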
by Jimleuk
This n8n workflow is designed to work on the local network and assists with reconciling downloaded bank statements against internal tenant records, to quickly highlight any issues with payments such as missed or late payments or payments of incorrect amounts. The assistant can then generate a report that flags these issues for attention, ensuring remedial action is taken.

**How it works**
- The workflow monitors a local network drive to watch for new bank statements that are added.
- Each new bank statement is imported into the n8n workflow, its contents extracted and sent to the AI Agent.
- The AI Agent analyses the line items to identify the dates and any incoming payments from tenants.
- The AI Agent then uses a locally-hosted Excel ("XLSX") spreadsheet to get both tenant records and property records. From this data, it can determine, for each active tenant, when payment is due, the amount, and the tenancy duration.
- Comparing these to the bank statement, the AI Agent can now report on where tenants have missed their payments, made late payments, or are paying incorrect amounts (a simplified sketch of this comparison appears below).
- The final report is generated and logged in the same XLSX file for a human to check and action.

**Requirements**
- A self-hosted version of n8n is required.
- OpenAI account for the AI model.

**Customising this workflow**
- If your organisation has a Slack or Teams account, consider sending reports to a channel for increased productivity. Email may be a good choice too.
- Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/1YRKjfakpInm23F_g8AHupKPBN-fphWgK/view?usp=sharing
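The core comparison the AI Agent performs can be thought of as matching expected payments against statement lines. Below is a deterministic sketch of the same idea (the tenant and statement field names are assumptions, not the template's actual spreadsheet columns):

```javascript
// Plain JavaScript sketch of the reconciliation logic (assumed field names).
function reconcile(tenants, statementLines) {
  return tenants.map(tenant => {
    // find an incoming payment referencing this tenant in the statement
    const payment = statementLines.find(
      line => line.amount > 0 && line.description.includes(tenant.reference)
    );
    if (!payment) {
      return { tenant: tenant.name, status: 'missed payment' };
    }
    if (payment.amount !== tenant.expectedRent) {
      return { tenant: tenant.name, status: 'incorrect amount', paid: payment.amount, expected: tenant.expectedRent };
    }
    if (new Date(payment.date).getDate() > tenant.dueDayOfMonth) {
      return { tenant: tenant.name, status: 'late payment', paidOn: payment.date };
    }
    return { tenant: tenant.name, status: 'ok' };
  });
}

// Example usage with made-up data:
const report = reconcile(
  [{ name: 'A. Tenant', reference: 'FLAT-1A', expectedRent: 950, dueDayOfMonth: 1 }],
  [{ date: '2024-05-03', description: 'FASTER PAYMENT FLAT-1A', amount: 950 }]
);
console.log(report); // -> [{ tenant: 'A. Tenant', status: 'late payment', ... }]
```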
by Jimleuk
This n8n template demonstrates how to calculate the evaluation metric "Correctness", which in this scenario compares and classifies the agent's response against a set of ground truths. The scoring approach is adapted from the open-source evaluations project RAGAS; you can see the source here: https://github.com/explodinggradients/ragas/blob/main/ragas/src/ragas/metrics/_answer_correctness.py

**How it works**
- This evaluation works best where the agent's response is allowed to be more verbose and conversational.
- For our scoring, we classify the agent's response into 3 buckets: True Positive (in the answer and the ground truth), False Positive (in the answer but not the ground truth) and False Negative (not in the answer but in the ground truth).
- We also calculate an average similarity score of the agent's response against all ground truths.
- The classification score and the similarity score are then averaged to give the final score (see the arithmetic sketch below).
- A high score indicates the agent is accurate, whereas a low score could indicate the agent has incorrect training data or is not providing a comprehensive enough answer.

**Requirements**
- n8n version 1.94+
- Check out this Google Sheet for sample data: https://docs.google.com/spreadsheets/d/1YOnu2JJjlxd787AuYcg-wKbkjyjyZFgASYVV0jsij5Y/edit?usp=sharing
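Numerically, the classification buckets reduce to an F1-style score that is then averaged with the similarity score. A minimal sketch of that arithmetic (this mirrors the general RAGAS approach; the exact weighting used in the template may differ):

```javascript
// Plain JavaScript sketch of the correctness score (illustrative weighting).
function correctnessScore({ tp, fp, fn, similarity }) {
  // F1-style score from the True Positive / False Positive / False Negative buckets
  const f1 = tp === 0 ? 0 : tp / (tp + 0.5 * (fp + fn));
  // Final score: simple average of the classification score and the similarity score (0..1)
  return (f1 + similarity) / 2;
}

// Example: 4 claims match the ground truth, 1 extra claim, 1 missing claim, similarity 0.82
console.log(correctnessScore({ tp: 4, fp: 1, fn: 1, similarity: 0.82 })); // ≈ 0.81
```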
by hana
**How it works**
1. Fetch transaction notification emails (including attachments)
2. Clean up the data (see the sketch below)
3. Let AI (Basic LLM Chain node) generate the bookkeeping item
4. Send to Google Sheets

**Details**
- The example fetches emails from Gmail labels; it is suggested to use filters to automatically organize emails into the labels.
- Data will be sent to the "raw data" sheet.
- Example Google Sheet: https://docs.google.com/spreadsheets/d/1_IhdHj8bxtsfH2MRqKuU2LzJuzm4DaeKSw46eFcyYts/edit?gid=1617968863#gid=1617968863
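The clean-up step typically strips HTML and pulls out the fields the LLM needs. As a rough, hedged example (the regexes and field names are assumptions; real bank notification formats vary, so treat this only as a starting point):

```javascript
// n8n Code node - rough cleanup of a transaction notification email (illustrative).
return $input.all().map(item => {
  const html = item.json.html || item.json.text || '';
  const text = html
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')
    .replace(/<[^>]+>/g, ' ')      // strip HTML tags
    .replace(/\s+/g, ' ')
    .trim();

  // Very rough amount extraction as a hint for the LLM (formats differ by bank).
  const amountMatch = text.match(/(?:USD|\$|€|£)\s?([\d,]+\.\d{2})/);
  return {
    json: {
      subject: item.json.subject,
      cleanedText: text,
      amountHint: amountMatch ? amountMatch[1] : null,
    },
  };
});
```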
by Eric Mooney
**Use case**
When a new service ticket is created in Taiga, it's often unclear whether it contains sufficient details to begin work. This workflow automates the triage process by:
- Using an AI model to extract key information from the ticket description.
- Automatically assigning values for:
  - Type (Bug, Enhancement, Onboarding, Question)
  - Severity (Wishlist, Minor, Normal, Important, Critical)
  - Priority (Low, Normal, High)
  - Status (New, Needs More Info, etc.)
- Detecting missing critical data and blocking the ticket if incomplete (a validation sketch follows below).

Setup instructions here: https://github.com/emooney/Service_Ticket_Triage_Helper
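The AI's structured output can be validated before it touches Taiga, which is where the "block if incomplete" behaviour comes from. A minimal sketch of such a validation step (the allowed values mirror the lists above; the `triage` and `missingInfo` field names are assumptions about the AI output):

```javascript
// n8n Code node - validate assumed AI triage output and flag incomplete tickets.
const ALLOWED = {
  type: ['Bug', 'Enhancement', 'Onboarding', 'Question'],
  severity: ['Wishlist', 'Minor', 'Normal', 'Important', 'Critical'],
  priority: ['Low', 'Normal', 'High'],
};

return $input.all().map(item => {
  const t = item.json.triage || {}; // e.g. { type: 'Bug', severity: 'Normal', priority: 'High', missingInfo: [] }
  const invalid = Object.entries(ALLOWED)
    .filter(([field, values]) => !values.includes(t[field]))
    .map(([field]) => field);
  const blocked = invalid.length > 0 || (t.missingInfo || []).length > 0;

  return {
    json: {
      ...t,
      status: blocked ? 'Needs More Info' : 'New',
      invalidFields: invalid,
    },
  };
});
```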
by Kumar Shivam
**🧠 AI Blog Generator for Shopify Products using GPT-4o**

The AI Blog Generator is an advanced automation workflow powered by n8n, integrating GPT-4o and Google Sheets to generate SEO-rich blog articles for Shopify products. It automates the entire process — from pulling product data, analyzing images for nutritional information, to producing structured HTML content ready for publishing — with zero manual writing.

**💡 Key Advantages**
- **🔗 Shopify Product Sync**: Automatically pulls product data (title, description, images, etc.) via the Shopify API.
- **🤖 AI-Powered Nutrition Extraction**: Uses GPT-4o to intelligently analyze product images and extract nutritional information.
- **✍️ SEO Blog Generation**: GPT-4o generates blog titles, meta descriptions, and complete articles using both product metadata and extracted nutritional info.
- **🗂️ Structured Content Output**: Produces well-formatted HTML with headers, bullet points, and nutrition tables for seamless Shopify blog integration.
- **📄 Google Sheets Integration**: Tracks blog creation, manages retries, and prevents duplicate publishing using a centralized Google Sheet.
- **📤 Shopify Blog API Integration**: Publishes the generated blog to Shopify using a two-step blog + article API call.

**⚙️ How It Works**
1. Manual Trigger: Initiate the process using a test trigger or a scheduler.
2. Fetch Products from Shopify: Retrieves all product details including descriptions and images.
3. Extract Product Images: Splits and processes each image individually.
4. OCR + Nutrition AI: GPT-4o reads nutrition facts from product images and skips items without valid info.
5. Check Existing Logs: References a Google Sheet to avoid duplicates and determine retry status.
6. AI Blog Generation: Creates a blog with headings, bullet points, an intro, and a nutrition table.
7. Shopify Blog + Article Posting: Uses the Shopify API to publish the blog and its content.
8. Update Google Sheet: Logs the blog URL, HTML content, errors, and status for future reference.

**🛠️ Setup Steps**
- **Shopify Node**: Connects to your Shopify store and fetches product data.
- **Split Out Node**: Divides product images for individual OCR processing.
- **OpenAI Node**: Uses GPT-4o to extract nutrition data from images.
- **If Node**: Filters for entries with valid nutrition information.
- **Edit Fields Node**: Formats the product data for AI processing.
- **AI Agent Node**: Generates SEO blog content.
- **Google Sheets Nodes**: Read and update blog creation status.
- **HTTP Request Nodes**: Post the blog and article via Shopify’s API (see the sketch below).

**🔐 Credentials Required**
- **Shopify Access Token** – For retrieving product data and posting blogs
- **OpenAI API Key** – For GPT-4o-based AI generation and image processing
- **Google Sheets OAuth** – For accessing the log sheet

**👤 Ideal For**
- Ecommerce teams looking to automate content for hundreds of products
- Shopify store owners aiming to boost organic traffic through blogging
- Marketing teams building scalable, AI-driven content workflows

**💬 Bonus Tip**
The workflow is modular. You can easily extend it with internal linking, language translation, or even social media sharing — all within the same n8n flow.
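The two-step blog + article posting follows the general pattern of Shopify's REST Admin API. The sketch below is only illustrative: the store domain, token handling, and API version are assumptions, and the payload fields should be verified against Shopify's current documentation before use.

```javascript
// Minimal sketch of a two-step publish via Shopify's REST Admin API (verify against Shopify docs).
const SHOP = 'your-store.myshopify.com';   // assumption: replace with your store domain
const TOKEN = process.env.SHOPIFY_TOKEN;   // assumption: Admin API access token from your app
const headers = { 'X-Shopify-Access-Token': TOKEN, 'Content-Type': 'application/json' };

async function publishArticle(blogTitle, articleTitle, bodyHtml) {
  // Step 1: create (or reuse) the blog container
  const blogRes = await fetch(`https://${SHOP}/admin/api/2024-01/blogs.json`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ blog: { title: blogTitle } }),
  });
  const { blog } = await blogRes.json();

  // Step 2: post the generated article into that blog
  const articleRes = await fetch(`https://${SHOP}/admin/api/2024-01/blogs/${blog.id}/articles.json`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ article: { title: articleTitle, body_html: bodyHtml } }),
  });
  return (await articleRes.json()).article;
}
```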
by Jez
**Summary**
This n8n workflow implements an AI-powered "Local Event Finder" agent. It takes user criteria (like event type, city, date, and interests), uses a suite of search tools (Brave Web Search, Brave Local Search, Google Gemini Search) and a web scraper (Jina AI) to find relevant events, and returns formatted details. The entire agent is exposed as a single, easy-to-use MCP (Model Context Protocol) tool, making it simple to integrate into other workflows or applications. This template cleverly combines the MCP server endpoint and the AI agent logic into a single n8n workflow file for ease of import and management.

**Key Features**
- **Intelligent Multi-Tool Search**: Dynamically utilizes web search, precise local search, and advanced Gemini semantic search to find events.
- **Detailed Information via Web Scraping**: Employs Jina AI to extract comprehensive details directly from event web pages.
- **Simplified MCP Tool Exposure**: Makes the complex event-finding logic available as a single, callable tool for other MCP-compatible clients (e.g., Roo Code, Cline, other n8n workflows).
- **Customizable AI Behavior**: The core AI agent's behavior, tool usage strategy, and output formatting can be tailored by modifying its System Prompt.
- **Modular Design**: Uses distinct nodes for the LLM, memory, and each external tool, allowing for easier modification or extension.

**Benefits**
- **Simplifies Client-Side Integration**: Offloads the complexity of event searching and data extraction from client applications.
- **Provides Richer Event Data**: Goes beyond simple search links to extract and format key event details.
- **Flexible & Adaptable**: Can be adjusted to various event search needs and can incorporate new tools or data sources.
- **Efficient Processing**: Leverages specialized tools for different aspects of the search process.

**Nodes Used**
- MCP Trigger
- Tool Workflow
- Execute Workflow Trigger
- AI Agent
- Google Gemini Chat Model (ChatGoogleGenerativeAI)
- Simple Memory (Window Buffer Memory)
- MCP Client (for Brave Search tools via Smithery)
- Google Gemini Search Tool
- Jina AI Tool

**Prerequisites**
- An active n8n instance.
- **Google AI API Key**: For the Gemini LLM (Google Gemini Chat Model node) and the Google Gemini Search Tool. Ensure your key is enabled for these services.
- **Jina AI API Key**: For the jina_ai_web_page_scraper node. A free tier is often available.
- **Access to a Brave Search MCP Provider (Optional but Recommended)**: This template uses MCP Client nodes configured for Brave Search via a provider like Smithery. You'll need an account/API key for your chosen Brave Search MCP provider to configure the smithery brave search credential. Alternatively, you could adapt these to call the Brave Search API directly if you manage your own access, or replace them with other search tools.

**Setup Instructions**
1. Import Workflow: Download the JSON file for this template and import it into your n8n instance.
2. Configure Credentials:
   - Google Gemini LLM: Locate the Google Gemini Chat Model node. Select or create a "Google Gemini API" credential (named Google Gemini Context7 in the template) using your Google AI API Key.
   - Google Gemini Search Tool: Locate the google_gemini_event_search node. Select or create a "Gemini API" credential (named Gemini Credentials account in the template) using your Google AI API Key (ensure it's enabled for Search/Vertex AI).
   - Jina AI Web Scraper: Locate the jina_ai_web_page_scraper node. Select or create a "Jina AI API" credential (named Jina AI account in the template) using your Jina AI API Key.
   - Brave Search (via MCP): You'll need an MCP Client HTTP API credential to connect to your Brave Search MCP provider (e.g., Smithery). Create a new "MCP Client HTTP API" credential in n8n; name it, for example, smithery brave search; and configure it with the Base URL and any required authentication (e.g., API key in headers) for your Brave Search MCP provider. Locate the brave_web_search and brave_local_search MCP Client nodes in the workflow and assign the smithery brave search (or your named credential) to both of these nodes.
3. Activate Workflow: Ensure the workflow is active.
4. Note the MCP Trigger Path: Locate the local_event_finder (MCP Trigger) node. The Path field (e.g., 0ca88864-ec0a-4c27-a7ec-e28c5a900697) combined with your n8n webhook base URL forms the endpoint for client calls. Example endpoint: YOUR_N8N_INSTANCE_URL/webhooks/PATH-TO-MCP-SERVER (an illustrative client payload is sketched after the customization notes).

**Customization**
- **AI Behavior**: Modify the "System Message" parameter within the event_finder_agent node to change the AI's persona, its strategy for using tools, or the desired output format.
- **LLM Model**: Swap the Google Gemini Chat Model node with another compatible LLM node (e.g., OpenAI Chat Model) if desired. You'll need to adjust credentials and potentially the system prompt.
- **Tools**: Add, remove, or replace tool nodes (e.g., use a different search provider, add a weather API tool) and update the event_finder_agent's system prompt and tool configuration accordingly.
- **Scraping Depth**: Be mindful of the jina_ai_web_page_scraper's usage due to potential timeouts. The system prompt already guides the LLM on this, but you can adjust its usage instructions.
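From a client's perspective, the whole workflow behaves like one MCP tool that accepts the user's search criteria. As a purely illustrative sketch of a `tools/call` request an MCP client might send (the argument names are assumptions; match them to the tool schema the workflow actually exposes, and note that the MCP transport itself is handled by the client library rather than a raw HTTP POST):

```javascript
// Illustrative MCP "tools/call" payload a client could send to the local_event_finder tool.
// The argument names below are assumptions; align them with the workflow's tool definition.
const toolCall = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'local_event_finder',
    arguments: {
      event_type: 'live music',
      city: 'Brisbane',
      date: 'this weekend',
      interests: ['jazz', 'outdoor venues'],
    },
  },
};

console.log(JSON.stringify(toolCall, null, 2));
```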