by Cyril Nicko Gaspar
🔍 Email Lookup with Google Search from Postgres Database

This N8N workflow is designed to enrich seller data stored in a Postgres database by performing automated Google search lookups. It uses Bright Data's Web Unlocker to bypass search result restrictions and the HTML Extract node to parse and extract relevant information from webpages. The main purpose of this workflow is to discover missing contact details, company domains, and secondary emails for businesses or sellers based on existing database entries.

🎯 Problem This Workflow Solves
Manually searching for missing seller or business details—like secondary emails, websites, or domain names—can be time-consuming and inefficient, especially for large datasets. This workflow automates the search and data enrichment process, significantly reducing manual effort while improving the quality and completeness of your seller database.

✅ Prerequisites
Before using this template, make sure the following requirements are met:
✔️ A Bright Data account with access to the Web Unlocker or Amazon Scraper API
✔️ A valid Bright Data API key
✔️ An active PostgreSQL database with seller data
✔️ An N8N self-hosted instance (recommended for using community nodes like n8n-nodes-brightdata)
✔️ The n8n-nodes-brightdata package installed (custom node for Bright Data integration)

⚙️ Setup Instructions

Step 1: Prepare Your Postgres Table
Create a table in Postgres with the following structure (you can adjust field names if needed):

```sql
CREATE TABLE sellers (
  seller_id SERIAL PRIMARY KEY,
  seller_name TEXT,
  primary_email TEXT,
  company_info TEXT,
  trade_name TEXT,
  business_address TEXT,
  coc_number TEXT,
  vat_number TEXT,
  commercial_register TEXT,
  secondary_email TEXT,
  domain TEXT,
  seller_slug TEXT,
  source TEXT
);
```

Step 2: Setup Web Unlocker on Bright Data
1. Go to your Bright Data dashboard.
2. Navigate to Proxies & Scraping → Web Unlocker.
3. Create a new zone, selecting Web Unlocker API under Scraping Solutions.
4. Whitelist your server IP if required.

Step 3: Generate API Key
1. In the Bright Data dashboard, go to the API section.
2. Generate a new API key.
3. In N8N, create HTTP Request Credentials using Bearer Authentication with the API key.

Step 4: Install the Bright Data Node in N8N
1. In your N8N self-hosted instance, go to Settings → Community Nodes.
2. Search for and install n8n-nodes-brightdata.

🔄 Workflow Functionality
🔁 Trigger: Can be set to run on a schedule (e.g., daily) or manually.
📥 Read: Fetches seller records from the Postgres table.
🌐 Search: Uses Bright Data to perform a Google search based on seller_name, company_info, or trade_name.
🧾 Extract: Parses the HTML content using the HTML Extract node to identify potential websites and email addresses (a parsing sketch follows at the end of this section).
📝 Update: Writes enriched data (like domain or secondary_email) back to the Postgres table.

💡 Use Cases
- Lead enrichment for e-commerce sellers
- Domain and contact info discovery for B2B databases
- Email and web domain verification for CRM systems
- Market research automation

🛠️ Customization Tips
- You can enhance the parsing logic in the HTML Extract node to look for phone numbers, LinkedIn profiles, or social media links.
- Modify the search query logic to include additional parameters like location or industry for more refined results.
- Integrate additional APIs (e.g., Hunter.io, Clearbit) for email validation or social profile enrichment.
- Add filtering to skip entries that already have domain or secondary_email.
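If you prefer to do the extraction in a Code node instead of the HTML Extract node, here is a minimal sketch of what the parsing step could look like. The helper name, regexes, and noise filter are illustrative assumptions, not the template's exact extraction rules:

```javascript
// Pull candidate emails and domains out of a fetched search-results page.
function extractContacts(html) {
  // Simple patterns; tighten them for production use.
  const emailRe = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;
  const urlRe = /https?:\/\/(?:www\.)?([a-z0-9.-]+\.[a-z]{2,})/gi;

  const emails = [...new Set(html.match(emailRe) || [])];
  const domains = [...new Set([...html.matchAll(urlRe)].map(m => m[1].toLowerCase()))]
    // Skip search-engine and social domains that are never the seller's own site.
    .filter(d => !/google\.|facebook\.|linkedin\./.test(d));

  return { secondary_email: emails[0] || null, domain: domains[0] || null };
}

// Example: feed this from the Bright Data response inside an n8n Code node.
const sample = '<a href="https://www.acme-trading.nl">Acme</a> contact: info@acme-trading.nl';
console.log(extractContacts(sample));
// → { secondary_email: 'info@acme-trading.nl', domain: 'acme-trading.nl' }
```

The returned object maps directly onto the secondary_email and domain columns in the sellers table, so the 📝 Update step can write it back unchanged.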
by Don Jayamaha Jr
📡 This workflow serves as the central Alpha Vantage API fetcher for Tesla trading indicators, delivering cleaned 20-point JSON outputs for three timeframes: 15min, 1hour, and 1day.

It is required by the following agents:
- Tesla 15min, 1h, 1d Indicators Tools
- Tesla Financial Market Data Analyst Tool

✅ Requires an Alpha Vantage Premium API Key
🚀 Used as a sub-agent via webhook endpoints triggered by other workflows

📈 What It Does
For each timeframe (15min, 1h, 1d), this tool:
1. Triggers 6 technical indicators via Alpha Vantage: RSI, MACD, BBANDS, SMA, EMA, ADX
2. Trims the raw response to the latest 20 data points
3. Reformats into a clean JSON structure (a formatter sketch follows this section):

```json
{
  "indicator": "MACD",
  "timeframe": "1hour",
  "data": {
    "timestamp": "...",
    "macd": 0.32,
    "signal": 0.29
  }
}
```

4. Returns results via Webhook Respond for the calling agent

📂 Required Credentials
🔑 Alpha Vantage Premium API Key
- Set up under Credentials > HTTP Query Auth
- Name: Alpha Vantage Premium
- Query Param: apikey
- Get yours here: https://www.alphavantage.co/premium/

🛠️ Setup Steps
1. Import Workflow into n8n. Name it: Tesla_Quant_Technical_Indicators_Webhooks_Tool
2. Add HTTP Query Auth Credential. Name: Alpha Vantage Premium, Param key: apikey, Value: your Alpha Vantage key
3. Publish and Use the Webhooks. This workflow exposes 3 endpoints:
   - /15minData → used by 15m Indicator Tool
   - /1hourData → used by 1h Indicator Tool
   - /1dayData → used by 1d Indicator Tool
4. Connect via Execute Workflow or HTTP Request. Ensure the caller sends the webhook trigger to the correct path.

🧱 Architecture Summary
Each timeframe section includes:

| Component          | Details                                       |
| ------------------ | --------------------------------------------- |
| 📡 Webhook Trigger | Entry node (/15minData, /1hourData, etc.)     |
| 🔄 API Calls       | 6 nodes fetching indicators via Alpha Vantage |
| 🧹 Formatters      | JS Code nodes to clean and trim responses     |
| 🧩 Merge Node      | Consolidates cleaned JSONs                    |
| 🚀 Webhook Respond | Returns structured data to calling workflow   |

🧾 Sticky Notes Overview
✅ Webhook Entry: Instructions per timeframe
✅ API Call Summary: Alpha Vantage endpoint for each indicator
✅ Format Nodes: Explain JSON parsing and cleaning
✅ Merge Logic: Final output format
✅ Webhook Response: What gets returned to caller

All stickies follow n8n standard color-coding:
- Blue = Webhook flow
- Yellow = API request group
- Purple = Formatters
- Green = Merge step
- Gray = Workflow overview and usage

🔐 Licensing & Support
© 2025 Treasurium Capital Limited Company
This agent is part of the Tesla Quant AI Trading System and protected under U.S. copyright.
For support:
🔗 Don Jayamaha – LinkedIn
🔗 n8n Creator Profile

🚀 Use this API tool to feed Tesla technical indicators into any AI or trading agent across 15m, 1h, and 1d timeframes. Required for all Tesla Quant Agent indicator tools.
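A hedged sketch of one of the 🧹 Formatter Code nodes, for the MACD case. The "Technical Analysis: MACD" key and the MACD/MACD_Signal field names follow Alpha Vantage's documented response shape; the helper name and stub data are illustrative, and where the example payload above shows a single data point, this sketch returns the full trimmed array of the latest 20:

```javascript
// Trim a raw Alpha Vantage response to the latest N points and reshape it.
function formatIndicator(raw, indicator, timeframe, points = 20) {
  const series = raw[`Technical Analysis: ${indicator}`] || {};
  const data = Object.entries(series)
    .sort(([a], [b]) => (a < b ? 1 : -1)) // newest timestamps first
    .slice(0, points)
    .map(([timestamp, v]) => ({
      timestamp,
      macd: Number(v.MACD),
      signal: Number(v.MACD_Signal),
    }));
  return { indicator, timeframe, data };
}

// Example with a two-point stub response:
const raw = {
  'Technical Analysis: MACD': {
    '2025-01-02 15:00': { MACD: '0.32', MACD_Signal: '0.29' },
    '2025-01-02 14:00': { MACD: '0.28', MACD_Signal: '0.27' },
  },
};
console.log(JSON.stringify(formatIndicator(raw, 'MACD', '1hour'), null, 2));
```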
by ist00dent
This n8n template lets you automatically pull market data for the top cryptocurrencies from CoinGecko every hour, calculate custom volatility and market-health metrics, classify each coin's price action into buy/sell/hold/neutral signals with risk ratings, and expose both individual analyses and a portfolio summary via a webhook. It's perfect for crypto analysts, DeFi builders, or portfolio managers who want on-demand insights without writing a single line of backend code.

🔧 How it works
1. Schedule Trigger fires every hour (or the interval you choose).
2. HTTP Request (CoinGecko) fetches the top 10 coins by market cap, including 24h, 7d, and 30d price change percentages.
3. Split In Batches ensures each coin is processed sequentially.
4. Function (Calculate Market Metrics) computes (a sketch follows this section):
   - A weighted volatility score
   - Market-cap-to-volume ratio
   - Price-to-ATH ratio
   - Composite market score
5. IF & Switch nodes categorize each coin's 24h price action (up >5%, down >5%, high volatility, or stable) and append:
   - signal (BUY/SELL/HOLD/NEUTRAL)
   - riskRating (High/Medium/Low/Unknown)
   - recommendation & investmentStrategy guidance
6. NoOp & Merge nodes consolidate each branch back into a single data stream.
7. Function (Generate Portfolio Summary) aggregates all analyses into:
   - A Markdown portfolioSummary
   - Counts of buy/sell/hold/neutral signals
   - Risk distribution
8. Webhook Response returns the full JSON payload with individual analyses and the summary for downstream consumers.

👤 Who is it for?
This workflow is ideal for:
- Crypto researchers and analysts who need scheduled market insights
- DeFi and trading bot developers looking to automate signal generation
- Portfolio managers seeking a no-code overview of top assets
- Automation engineers exploring API integration and data enrichment

📑 Data Structure
When you trigger the webhook, you'll receive a JSON object containing:
- individualAnalyses: Array of { coin, symbol, currentPrice, priceChanges, marketMetrics, signal, riskRating, recommendation }
- portfolioSummary: Markdown report summarizing signals, risk distribution, and top opportunity
- marketSignals: Counts of each signal type
- riskDistribution: Counts of each risk rating
- timestamp: ISO string of analysis time

⚙️ Setup Instructions
1. Import: In the n8n Editor, click "Import from JSON" and paste this workflow JSON.
2. Configure Schedule: Double-click the Schedule Trigger and set your desired interval (default: every hour).
3. Webhook Path: Open the Webhook node, choose a unique path (e.g., /crypto-analysis), and set the method to "POST".
4. Activate: Save and activate the workflow.
5. Test: Open the webhook URL in another tab or use cURL:

```bash
curl -X POST https://<your-n8n-host>/webhook/<path>
```

You'll get back a JSON payload with both portfolioSummary and individualAnalyses.

📝 Tips
- Rate-Limit Handling: If CoinGecko returns 429, insert a Delay node (e.g., 500 ms) after the HTTP Request.
- Batch Size: Default is 1 coin at a time; you can bump it up to parallelize.
- Customization: Tweak volatility weightings or add new metrics directly in the "Calculate Market Metrics" Function node.
- Extension: Swap CoinGecko for another API by updating the HTTP Request URL and field mappings.
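Here is a sketch of what the "Calculate Market Metrics" Function node could look like. The input fields match CoinGecko's /coins/markets response (with price_change_percentage=24h,7d,30d), but the weights (0.5/0.3/0.2) and the signal/risk thresholds are illustrative assumptions — the template expects you to tune them:

```javascript
function analyzeCoin(c) {
  // Weighted volatility score: recent moves count more than older ones.
  const volatilityScore =
    Math.abs(c.price_change_percentage_24h) * 0.5 +
    Math.abs(c.price_change_percentage_7d_in_currency) * 0.3 +
    Math.abs(c.price_change_percentage_30d_in_currency) * 0.2;
  const capToVolume = c.market_cap / c.total_volume;
  const priceToAth = c.current_price / c.ath;

  // Classification mirroring the IF & Switch branches described above.
  let signal = 'NEUTRAL';
  if (c.price_change_percentage_24h > 5) signal = 'BUY';
  else if (c.price_change_percentage_24h < -5) signal = 'SELL';
  else if (volatilityScore < 5) signal = 'HOLD';

  const riskRating = volatilityScore > 15 ? 'High' : volatilityScore > 7 ? 'Medium' : 'Low';
  return { coin: c.name, volatilityScore, capToVolume, priceToAth, signal, riskRating };
}

// Example with one stubbed CoinGecko row:
console.log(analyzeCoin({
  name: 'Bitcoin', current_price: 60000, ath: 73000,
  market_cap: 1.2e12, total_volume: 3.5e10,
  price_change_percentage_24h: 2.1,
  price_change_percentage_7d_in_currency: -4.3,
  price_change_percentage_30d_in_currency: 9.8,
}));
```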
by Zacharia Kimotho
Workflow documentation updated on 21 May 2025

This workflow keeps track of your brand mentions across different Facebook groups, provides an analysis of each post as positive, negative, or neutral, and updates the results to Google Sheets for further analysis. This is useful and relevant for brands looking to keep track of what people are saying about them and gauge customer satisfaction or dissatisfaction based on what they are talking about.

Who is this template for?
This workflow is for you if you:
- Need to keep track of your brand sentiment across different niche Facebook groups
- Own a SaaS and want to monitor it across different local Facebook groups
- Are looking to do some competitor research to understand what others don't like about their products
- Are testing the market on different market offerings and products to get the best results
- Are looking for sources other than review sites for product, software, or service reviews
- Are starting on market research and would like to get insights from different Facebook groups on app usage, strengths, weaknesses, features, etc.

How it works
- You set the desired schedule by which to monitor the groups.
- The workflow gets the brand names and Facebook groups to monitor.

Setup Steps
Before you begin, you will need access to a Bright Data API to run this workflow.
1. Make a copy of this Google Sheet to get started easily, and add the URLs of the Facebook groups to scrape and the brand names you wish to monitor.
2. Import the workflow JSON to your canvas.
3. Set your API key in the workflow.
4. Map the Google Sheet to your tables.
5. Optionally, update the current AI models to different models, e.g., Gemini or Anthropic.
6. Run the workflow.

Setup B
Bright Data provides an option to receive the results on an external webhook via a POST call. This can be collected via a Webhook node (a normalization sketch follows below).
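For Setup B, an illustrative sketch of a Code node sitting behind the n8n Webhook that receives Bright Data's POST and normalizes posts before the sentiment step. The payload field names (post_text, group_url, results) are assumptions — match them to what your Bright Data dataset actually delivers:

```javascript
function normalizePosts(body, brands) {
  const posts = Array.isArray(body) ? body : body.results || [];
  return posts
    .map(p => ({
      text: (p.post_text || '').trim(),
      group: p.group_url || '',
      // Keep only posts that actually mention one of the tracked brands.
      brand: brands.find(b => (p.post_text || '').toLowerCase().includes(b.toLowerCase())),
    }))
    .filter(p => p.brand);
}

console.log(normalizePosts(
  [{ post_text: 'Loving the new AcmeCRM update!', group_url: 'https://facebook.com/groups/saas' }],
  ['AcmeCRM', 'OtherBrand'],
));
// → [{ text: 'Loving the new AcmeCRM update!', group: '…/groups/saas', brand: 'AcmeCRM' }]
```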
by David Ashby
Complete MCP server exposing 1 IP2WHOIS Domain Lookup API operation to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add IP2WHOIS Domain Lookup credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the IP2WHOIS Domain Lookup API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ip2whois.com/v2
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the sketch below)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)
🔧 General (1 endpoint)
• GET /: Lookup WHOIS Data

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native IP2WHOIS Domain Lookup API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
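A sketch of how the $fromAI() placeholder works here. Inside the HTTP Request node, the domain query parameter is an n8n expression the connected AI agent fills at call time; the plain-JS equivalent below shows the request the node ends up making. Treat the exact parameter names and response fields as assumptions to verify against the IP2WHOIS docs:

```javascript
// In the HTTP Request node, the query parameter is set via an expression like:
//   domain = {{ $fromAI('domain', 'Domain name to look up', 'string') }}
// which the MCP-connected agent populates per request.

// Equivalent plain HTTP call (Node 18+ fetch):
const lookup = async (domain, apiKey) => {
  const url = `https://api.ip2whois.com/v2?key=${apiKey}&domain=${encodeURIComponent(domain)}`;
  const res = await fetch(url);
  return res.json();
};

lookup('example.com', '<your-api-key>').then(d => console.log(d.domain, d.create_date));
```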
by David Ashby
Complete MCP server exposing 1 Recommendation API operation to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Recommendation API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Recommendation API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)
🔧 Find (1 endpoint)
• POST /find: Get Promoted Listings Recommendations

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Recommendation API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 1 Buy Marketing API operation to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Buy Marketing API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Buy Marketing API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com/buy/marketing/v1_beta
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)
🔧 Merchandised_Product (1 endpoint)
• GET /merchandised_product: Fetch Merchandised Products

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Buy Marketing API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Alfonso Corretti
Who is this for? 🧑🏻🫱🏻🫲🏻🤖
Humans and Robots alike. This workflow can be used as a Chat Trigger, as well as a Workflow Trigger. It will take a natural language request and then generate a SQL query. The resulting query parameter will contain the query, and a sqloutput parameter will contain the results of executing that query.

What's the use case?
This template is most useful paired with other workflows that extract e-mail information and store it in a structured Postgres table, and use LLMs to understand inquiries about information contained in an e-mail inbox and formulate questions that need answering. Plus, the prompt can be easily adapted to formulate SQL queries over any kind of structured database.

Privacy and Economics
As LLM provider I'm using Ollama locally, as I consider my e-mail extremely sensitive information. As model, phi4-mini does an excellent job balancing quality and efficiency.

Setup
Upon running for the first time, this workflow will automatically trigger a sub-section to read all tables and extract their schema into a local file (a schema-formatting sketch follows below). Then, either by chatting with the workflow in n8n's interface or by using it as a sub-workflow, you will get a query and a sqloutput response.

Customizations
If you want to work with just one particular table yet keep edits at bay, append a condition to the List all tables in a database step, like so:

```sql
WHERE table_schema='public' AND table_name='my_emails_table_name'
```

To repurpose this workflow to work with any other data corpus in a structured database, inspect the AI Agent user and system prompts and edit them accordingly.
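A minimal sketch of how the schema-extraction branch can turn rows from the List all tables in a database step — typically an information_schema.columns query — into a compact, prompt-ready schema description for the LLM. The helper name and output format are illustrative assumptions:

```javascript
function describeSchema(rows) {
  const tables = {};
  for (const r of rows) {
    (tables[r.table_name] ||= []).push(`${r.column_name} ${r.data_type}`);
  }
  return Object.entries(tables)
    .map(([name, cols]) => `TABLE ${name} (${cols.join(', ')})`)
    .join('\n');
}

console.log(describeSchema([
  { table_name: 'my_emails_table_name', column_name: 'sender', data_type: 'text' },
  { table_name: 'my_emails_table_name', column_name: 'received_at', data_type: 'timestamp' },
]));
// → TABLE my_emails_table_name (sender text, received_at timestamp)
```

Feeding this one-line-per-table summary into the system prompt keeps the context small, which matters when running a local model like phi4-mini.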
by Billy Christi
Who is this for?
This workflow is perfect for:
- Businesses and teams who need an automated solution to organize, analyze, and retrieve insights from their internal documents.
- Researchers who want to quickly analyze and query large collections of research papers, reports, or datasets.
- Customer support teams looking to streamline access to product documentation and support resources.
- Legal and compliance professionals needing to reference and query legal documents with confidence.
- AI enthusiasts and developers wanting to implement Retrieval-Augmented Generation (RAG) systems without starting from scratch.

What problem is this workflow solving?
Manually organizing, processing, and searching through documents can be time-consuming, error-prone, and inefficient. This workflow solves that by:
- **Automating document processing** from Google Drive, supporting multiple formats like PDFs, CSVs, and Google Docs.
- **Extracting, chunking, and enhancing document text**, preserving context and improving AI comprehension.
- **Storing vector embeddings** in a secure, scalable Supabase vector database, enabling semantic search and retrieval.
- **Providing an interactive AI chat interface** that allows users to ask natural language questions and get precise, document-based answers.

This means teams can quickly access relevant insights from their document repositories—boosting productivity and ensuring accurate information retrieval.

Key Features
🚀 End-to-End Document Processing: From Google Drive upload detection to vector embedding and storage.
🔍 Semantic Search & Retrieval: Users can ask complex, natural-language questions and receive contextually relevant answers.
🤖 AI-Powered Summaries & Metadata: Automatically generates document titles and summaries using Google Gemini AI.
📝 Smart Chunking & Contextual Enhancement: Breaks documents into smart chunks with overlap, preserving context and table integrity.
🔐 Secure & Scalable Vector Database: Stores and retrieves embeddings in a Supabase vector store for fast, reliable searches.
💬 Conversational AI Interface: Uses OpenAI to power natural, accurate, and cost-effective AI chat interactions.

How does this workflow work?
1. Monitors Google Drive for new files
2. Extracts text from PDFs and CSVs (or Google Docs auto-converted)
3. Splits text into context-preserving chunks (a chunking sketch follows below)
4. Enhances chunk quality and stores embeddings in Supabase
5. Enables natural language search and AI-powered chat interactions with the stored documents

Typical Use Cases
📚 Corporate Knowledge Base
🔬 Research Paper Analysis
📞 Customer Support Document Query
⚖️ Legal Document Review and Analysis
🔍 Internal Team Documentation Search

Why You’ll Love It
This workflow lets you build a scalable, searchable, and AI-powered document system—without needing to write complex code or manage multiple systems. With this, you can:
- Stay organized with automated document processing.
- Deliver faster, more accurate answers to user queries.
- Reduce manual work and improve productivity.
- Gain a competitive edge with cutting-edge AI search capabilities.

Setup Requirements
- An n8n instance with Google Drive, Supabase, OpenAI, and Gemini credentials configured.
- Access to a Supabase vector store for storing document embeddings.
- Configurable chunk size, overlap, and processing limits (default: 1000 characters per chunk, 20 chunks max).
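A hedged sketch of the chunking step, using the template's stated defaults (1000-character chunks, 20-chunk cap). The 100-character overlap is an illustrative value for the configurable overlap setting:

```javascript
function chunkText(text, size = 1000, overlap = 100, maxChunks = 20) {
  const chunks = [];
  let start = 0;
  while (start < text.length && chunks.length < maxChunks) {
    const end = Math.min(start + size, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - overlap; // overlap preserves context across chunk boundaries
  }
  return chunks;
}

const doc = 'A'.repeat(2500);
console.log(chunkText(doc).map(c => c.length));
// → [1000, 1000, 700]
```

Each chunk is then embedded and written to the Supabase vector store; the overlap means a sentence straddling a boundary still appears whole in at least one chunk.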
by Hardikkumar
This workflow automates the entire process of creating SEO-optimized meta titles and descriptions. It analyzes your webpage, spies on top-ranking competitors for the same keywords, and then uses a multi-step AI process to generate compelling, length-constrained meta tags.

🤖 How It Works
This workflow operates in a three-phase process for each URL you provide:

Phase 1: Self-Analysis
When you add a URL to a Google Sheet with the status "New", the workflow scrapes your page's content. The first AI then performs a deep analysis to identify the page's primary keyword, semantic keyword cluster, search intent, and target audience.

Phase 2: Competitor Intelligence
The workflow takes your primary keyword and performs a live Google search. A custom code block intelligently filters the search results to identify true competitors (a filtering sketch follows below). A second AI analyzes their meta titles and descriptions to find common patterns and successful strategies.

Phase 3: Master Generation & Update
The final AI synthesizes all gathered intelligence—your page's data and the competitors' winning patterns—to generate a new, optimized meta title and description. It then writes this new data back to your Google Sheet and updates the status to "Generated".

⚙️ Setup Instructions
You should be able to set up this workflow in about 10-15 minutes ⏱️.

🔑 Prerequisites
You will need the following accounts and API keys:
- A Google Account with access to Google Sheets.
- A Google AI / Gemini API key.
- A SerpApi key for Google search data.
- A ScrapingDog API key for reliable website scraping.

🛠️ Configuration
1. Google Sheet Setup: Create a new Google Sheet. The workflow requires the following columns: URL, Status, Current Meta Title, Current Meta Description, Generated Meta Title, Generated Meta Description, and Ranking Factor.
2. Add Credentials:
   - Google Sheets Nodes: Connect your Google account credentials to the Google Sheets Trigger & Google Sheets nodes.
   - Google Gemini Nodes: Add your Google Gemini API key to the credentials for all three Google Gemini Chat Model nodes.
   - Scrape Website Node: In this HTTP Request node, go to Query Parameters and replace <your-api-key> with your ScrapingDog API key.
   - Googl SERP Node: In this HTTP Request node, go to Query Parameters and replace <your-api-key> with your SerpApi API key.
3. Configure Google Sheets Nodes:
   - Copy the Document ID from your Google Sheet's URL.
   - Paste this ID into the "Document ID" field in the following nodes: Google Sheets Trigger, Get row(s) in sheet1, and Update row in sheet.
   - In each of those nodes, select the correct sheet name from the "Sheet Name" dropdown.

✅ Activate Workflow
Save and activate the workflow. To run it, simply add a new row to your Google Sheet containing the URL you want to process and set the "Status" column to New.
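A sketch of what the Phase 2 "custom code block" could look like. The organic-results shape (title, link, snippet) matches SerpApi's typical response, but the exclusion list, helper name, and result cap are illustrative assumptions rather than the template's exact logic:

```javascript
function filterCompetitors(organicResults, ownDomain, max = 5) {
  // Drop non-competitor domains that routinely rank but aren't rivals.
  const noise = /wikipedia\.|youtube\.|amazon\.|pinterest\.|reddit\./i;
  return organicResults
    .filter(r => r.link && !r.link.includes(ownDomain) && !noise.test(r.link))
    .slice(0, max)
    .map(r => ({ title: r.title, snippet: r.snippet, link: r.link }));
}

console.log(filterCompetitors(
  [
    { title: 'Our own page', link: 'https://mysite.com/page', snippet: '...' },
    { title: 'Rival guide', link: 'https://rival.com/guide', snippet: '...' },
    { title: 'Wikipedia entry', link: 'https://en.wikipedia.org/wiki/X', snippet: '...' },
  ],
  'mysite.com',
));
// → only the rival.com result survives
```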
by Alex Gurinovich
AI-powered Automated Crypto Insights with Chart-img and BrowserAI

Tired of paying for costly crypto updates? Or reading long analyses? This n8n workflow automates the delivery of personalized crypto insights, using Chart-img to capture coin graphs of BTC, ETH, SOL, and XRP as base64 images, and BrowserAI for web scraping and gathering news and articles. This setup ensures thorough market coverage and timely updates, without breaking the bank.

Overview
Designed for crypto enthusiasts, traders, and analysts, this workflow automates the process of collecting and distributing valuable crypto information. It's perfect for anyone wanting consistent and accurate updates conveniently.

Setup Instructions
Pre-conditions
- Chart-img Account: Register for a Chart-img account and obtain an API key here.
- BrowserAI Account: Sign up for BrowserAI and get your API key from your BrowserAI dashboard.

Step-by-Step Setup
🗓️ Schedule and Date Calculation
- Triggers twice daily at 8 AM and 8 PM to ensure up-to-date insights; the schedule can be changed to your liking.
- Calculates yesterday's date dynamically for accurate data retrieval (see the sketch below).

📊 Coin Graph Capture with Chart-img
- Uses the Chart-img API to capture 24-hour graphs for BTC, ETH, SOL, and XRP.
- Converts images to base64 strings for easy integration into the analysis.

🌐 Web Scraping with BrowserAI
- Creates tasks in BrowserAI to gather the latest crypto news and insights.
- Automates data extraction for comprehensive market analysis.

⌛ Monitor and Complete Tasks
- Incorporates status checks to ensure BrowserAI tasks complete successfully before proceeding.

✏️ Analyze and Synthesize Information
- Combines graph data with web-scraped insights for an enriched summary.
- Uses AI to generate simple, informative descriptions under 60 words so you aren't overloaded.

📩 Deliver Insights Efficiently
- Sends the compiled analysis to your Telegram, with easy options to switch to WhatsApp, email, or any other communication channel.

Customization Guidance
- **Content Personalization:** Customize the datasets and keywords for tailored updates.
- **Modify Schedule:** Adjust triggering times according to your needs using n8n's scheduling options.

This workflow delivers a seamless and cost-effective approach to staying informed about crypto market trends, combining the latest technology for superior insights.

**WARNING:** This template is intended for personal use only and does not constitute financial advice. Any actions taken using this tool are solely the user's responsibility.
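Two small steps from the schedule branch, sketched as plain helpers: computing yesterday's date for data retrieval, and turning a Chart-img image response into a base64 string. The helper names are illustrative, and the 'x-api-key' header name is an assumption to verify against Chart-img's authentication docs:

```javascript
function yesterdayISO(now = new Date()) {
  const d = new Date(now);
  d.setUTCDate(d.getUTCDate() - 1);
  return d.toISOString().slice(0, 10); // e.g. "2025-01-01"
}

async function chartToBase64(url, apiKey) {
  // 'x-api-key' is an assumed auth header; check Chart-img's docs.
  const res = await fetch(url, { headers: { 'x-api-key': apiKey } });
  const buf = Buffer.from(await res.arrayBuffer());
  return buf.toString('base64');
}

console.log(yesterdayISO(new Date('2025-01-02T08:00:00Z')));
// → "2025-01-01"
```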
by Adam Bertram
An intelligent IT support agent that uses Azure AI Search for knowledge retrieval, Microsoft Entra ID integration for user management, and Jira for ticket creation. The agent can answer questions using internal documentation and perform administrative tasks like password resets.

How It Works
The workflow operates in three main sections:
1. Agent Chat Interface: A chat trigger receives user messages and routes them to an AI agent powered by Google Gemini. The agent maintains conversation context using buffer memory and has access to multiple tools for different tasks.
2. Knowledge Management: Users can upload documentation files (.txt, .md) through a form trigger. These documents are processed, converted to embeddings using OpenAI's API, and stored in an Azure AI Search index with vector search capabilities.
3. Administrative Tools: The agent can query Microsoft Entra ID to find users, reset passwords, and create Jira tickets when issues need escalation. It uses semantic search to find relevant internal documentation before responding to user queries (a query sketch follows at the end of this section).

The workflow includes a separate setup section that creates the Azure AI Search service and index with proper vector search configuration, semantic search capabilities, and the required field schema.

Prerequisites
To use this template, you'll need:
- n8n cloud or self-hosted instance
- Azure subscription with permissions to create AI Search services
- Microsoft Entra ID (Azure AD) access with user management permissions
- OpenAI API account for embeddings
- Google Gemini API access
- Jira Software Cloud instance
- Basic understanding of Azure resource management

Setup Instructions
1. Import the template into n8n.
2. Configure credentials:
   - Add Google Gemini API credentials
   - Add OpenAI API credentials for embeddings
   - Add Microsoft Azure OAuth2 credentials with appropriate permissions
   - Add Microsoft Entra ID OAuth2 credentials
   - Add Jira Software Cloud API credentials
3. Update workflow parameters:
   - Open the "Set Common Fields" nodes
   - Replace <azure subscription id> with your Azure subscription ID
   - Replace <azure resource group> with your target resource group name
   - Replace <azure region> with your preferred Azure region
   - Replace <azure ai search service name> with your desired service name
   - Replace <azure ai search index name> with your desired index name
   - Update the Jira project ID in the "Create Jira Ticket" node
4. Set up Azure infrastructure:
   - Run the manual trigger "When clicking 'Test workflow'" to create the Azure AI Search service and index
   - This creates the vector search index with semantic search configuration
5. Configure the vector store webhook:
   - Update the "Invoke Query Vector Store Webhook" node URL with your actual webhook endpoint
   - The webhook URL should point to the "Semantic Search" webhook in the same workflow
6. Upload knowledge base:
   - Use the "On Knowledge Upload" form to upload your internal documentation
   - Supported formats: .txt and .md files
   - Documents will be automatically embedded and indexed
7. Test the setup:
   - Use the chat interface to verify the agent responds appropriately
   - Test knowledge retrieval with questions about uploaded documentation
   - Verify Entra ID integration and Jira ticket creation

Security Considerations
- Use least-privilege access for all API credentials
- Microsoft Entra ID credentials should have limited user management permissions
- Azure credentials need Search Service Contributor and Search Index Data Contributor roles
- OpenAI API key should have usage limits configured
- Jira credentials should be restricted to specific projects
- Consider implementing rate limiting on the chat interface
- Review password reset policies and ensure force password change is enabled
- Validate all user inputs before processing administrative requests

Extending the Template
You could enhance this template by:
- Adding support for additional file formats (PDF, DOCX) in the knowledge upload
- Implementing role-based access control for different administrative functions
- Adding integration with other ITSM tools beyond Jira
- Creating automated escalation rules based on query complexity
- Adding analytics and reporting for support interactions
- Implementing multi-language support for international organizations
- Adding approval workflows for sensitive administrative actions
- Integrating with Microsoft Teams or Slack for notifications
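A heavily hedged sketch of what the "Semantic Search" webhook does under the hood: embed the user's question, then run a vector query against the Azure AI Search index. The vectorQueries body follows Azure AI Search's REST API, but the api-version string, index field names (contentVector, content, title), and k value are assumptions — match them to the schema the setup section actually creates:

```javascript
async function searchKnowledgeBase(question, embed, endpoint, indexName, apiKey) {
  const vector = await embed(question); // e.g. an OpenAI embeddings call
  const res = await fetch(
    `${endpoint}/indexes/${indexName}/docs/search?api-version=2023-11-01`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'api-key': apiKey },
      body: JSON.stringify({
        vectorQueries: [{ kind: 'vector', vector, fields: 'contentVector', k: 5 }],
        select: 'content,title',
      }),
    },
  );
  return (await res.json()).value; // top-k matching chunks returned to the agent
}
```

The returned chunks are what the Gemini-powered agent grounds its answer on before deciding whether to escalate to a Jira ticket.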