by Daniel Agrici
This workflow automates business intelligence. Submit one URL, and it scrapes the website, uses AI to perform a comprehensive analysis, and generates a professional report in Google Doc and PDF format. It's perfect for agencies, freelancers, and consultants who need to streamline client research or competitive analysis. How It Works The workflow is triggered by a form input, where you provide a single website URL. Scrape: It uses Firecrawl to scrape the sitemap and get the full content from the target website. Analyze: The main workflow calls a Tools Workflow (included below) which uses Google Gemini and Perplexity AI agents to analyze the scraped content and extract key business information. Generate & Deliver: All the extracted data is formatted and used to populate a template in Google Docs. The final report is saved to Google Drive and delivered via Gmail. What It Generates The final report is a comprehensive business analysis, including: Business Overview: A full company description. Target Audience Personas: Defines the demographic and psychographic profiles of ideal customers. Brand & UVP: Extracts the brand's personality matrix and its Unique Value Proposition (UVP). Customer Journey: Maps the typical customer journey from Awareness to Loyalty. Required Tools This workflow requires n8n and API keys/credentials for the following services: Firecrawl (for scraping) Perplexity (for AI analysis) Google Gemini (for AI analysis) Google Services (for Docs, Drive, and Gmail) ⚠️ Required: Tools Workflow This workflow will not work without its "Tools" sub-workflow. Please create a new, separate workflow in n8n, name it (e.g., "Business Analysis Tools"), and paste the following code into it. { "name": "Business Analysis Workflow Tools", "nodes": [ { "parameters": { "workflowInputs": { "values": [ { "name": "function" }, { "name": "keyword" }, { "name": "url" }, { "name": "location_code" }, { "name": "language_code" } ] } }, "type": "n8n-nodes-base.executeWorkflowTrigger", "typeVersion": 1.1, "position": [ -448, 800 ], "id": "e79e0605-f9ac-4166-894c-e5aa9bd75bac", "name": "When Executed by Another Workflow" }, { "parameters": { "rules": { "values": [ { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "8d7d3035-3a57-47ee-b1d1-dd7bfcab9114", "leftValue": "serp_search", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "serp_search" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "bb2c23eb-862d-4582-961e-5a8d8338842c", "leftValue": "ai_mode", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "ai_mode" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "4603eee1-3888-4e32-b3b9-4f299dfd6df3", "leftValue": "internal_links", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "internal_links" } ] }, "options": {} }, "type": "n8n-nodes-base.switch", "typeVersion": 3.2, "position": [ -208, 784 ], "id": 
"72c37890-7054-48d8-a508-47ed981551d6", "name": "Switch" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/organic/live/advanced", "authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword.replace(/[:'\"\\\\/]/g, '') }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"depth\": 10,\n \"group_organic_results\": true,\n \"load_async_ai_overview\": true,\n \"people_also_ask_click_depth\": 1\n }\n]", "options": { "redirect": { "redirect": { "followRedirects": false } } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 512 ], "id": "6203f722-b590-4a25-8953-8753a44eb3cb", "name": "SERP Google", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## SERP Google", "height": 272, "width": 688, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 432 ], "id": "81593217-034f-466d-9055-03ab6b2d7d08", "name": "Sticky Note3" }, { "parameters": { "assignments": { "assignments": [ { "id": "97ef7ee0-bc97-4089-bc37-c0545e28ed9f", "name": "platform", "value": "={{ $json.tasks[0].data.se }}", "type": "string" }, { "id": "9299e6bb-bd36-4691-bc6c-655795a6226e", "name": "type", "value": "={{ $json.tasks[0].data.se_type }}", "type": "string" }, { "id": "2dc26c8e-713c-4a59-a353-9d9259109e74", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "84c9be31-8f1d-4a67-9d13-897910d7ec18", "name": "results", "value": "={{ $json.tasks[0].result }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 512 ], "id": "a916551a-009b-403f-b02e-3951d54d2407", "name": "Prepare SERP output" }, { "parameters": { "content": "# Google Organic Search API\n\nThis API lets you retrieve real-time Google search results with a wide range of parameters and custom settings. \nThe response includes structured data for all available SERP features, along with a direct URL to the search results page. 
\n\n👉 Documentation\n", "height": 272, "width": 496, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 976, 432 ], "id": "87672b01-7477-4b43-9ccc-523ef8d91c64", "name": "Sticky Note17" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/ai_mode/live/advanced", "authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"device\": \"mobile\",\n \"os\": \"android\"\n }\n]", "options": { "redirect": { "redirect": {} } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 800 ], "id": "fb0001c4-d590-45b3-a3d0-cac7174741d3", "name": "AI Mode", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## AI Mode", "height": 272, "width": 512, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 720 ], "id": "2cea3312-31f8-4ff0-b385-5b76b836274c", "name": "Sticky Note11" }, { "parameters": { "assignments": { "assignments": [ { "id": "b822f458-ebf2-4a37-9906-b6a2606e6106", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "10484675-b107-4157-bc7e-b942d8cdb5d2", "name": "result", "value": "={{ $json.tasks[0].result[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 800 ], "id": "6b1e7239-ee2b-4457-8acb-17ce87415729", "name": "Prepare AI Mode Output" }, { "parameters": { "content": "# Google AI Mode API\n\nThis API provides AI-generated search result summaries and insights from Google. \nIt returns detailed explanations, overviews, and related information based on search queries, with parameters to customize the AI overview. 
\n\n👉 Documentation\n", "height": 272, "width": 496, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 800, 720 ], "id": "d761dc57-e35d-4052-a360-71170a155f7b", "name": "Sticky Note18" }, { "parameters": { "content": "## Input", "height": 384, "width": 544, "color": 7 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -528, 672 ], "id": "db90385e-f921-4a9c-89f3-53fc5825b207", "name": "Sticky Note" }, { "parameters": { "assignments": { "assignments": [ { "id": "b865f4a0-b4c3-4dde-bf18-3da933ab21af", "name": "platform", "value": "={{ $json.platform }}", "type": "string" }, { "id": "476e07ca-ccf6-43d4-acb4-4cc905464314", "name": "type", "value": "={{ $json.type }}", "type": "string" }, { "id": "f1a14eb8-9f10-4198-bbc7-17091532b38e", "name": "keyword", "value": "={{ $json.keyword }}", "type": "string" }, { "id": "181791a0-1d88-481c-8d98-a86242bb2135", "name": "results", "value": "={{ $json.results[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 800, 512 ], "id": "83fef061-5e0b-417c-b1f6-d34eb712fac6", "name": "Sort Results" }, { "parameters": { "content": "## Internal Links", "height": 272, "width": 272, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 1024 ], "id": "9246601a-f133-4ca3-aac8-989cb45e6cd2", "name": "Sticky Note7" }, { "parameters": { "method": "POST", "url": "https://api.firecrawl.dev/v2/map", "sendHeaders": true, "headerParameters": { "parameters": [ { "name": "Authorization", "value": "Bearer your-firecrawl-apikey" } ] }, "sendBody": true, "specifyBody": "json", "jsonBody": "={\n \"url\": \"https://{{ $json.url }}\",\n \"limit\": 400,\n \"includeSubdomains\": false,\n \"sitemap\": \"include\"\n }", "options": {} }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 368, 1104 ], "id": "fd6a33ae-6fb3-4331-ab6a-994048659116", "name": "Get Internal Links" }, { "parameters": { "content": "# Firecrawl Map API\n\nThis endpoint maps a website from a single URL and returns the list of discovered URLs (titles and descriptions when available) — extremely fast and useful for selecting which pages to scrape or for quickly enumerating site links. 
(Firecrawl)\n\nIt supports a search parameter to find relevant pages inside a site, location/languages options to emulate country/language (uses proxies when available), and SDK + cURL examples in the docs,\n\n👉 Documentation\n\n[1]: https://docs.firecrawl.dev/features/map \"Map | Firecrawl\"\n", "height": 272, "width": 624, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 560, 1024 ], "id": "08457204-93ff-4586-a76e-03907118be3c", "name": "Sticky Note24" } ], "pinData": { "When Executed by Another Workflow": [ { "json": { "function": "serp_search", "keyword": "villanyszerelő Largo Florida", "url": null, "location_code": 2840, "language_code": "hu" } } ] }, "connections": { "When Executed by Another Workflow": { "main": [ [ { "node": "Switch", "type": "main", "index": 0 } ] ] }, "Switch": { "main": [ [ { "node": "SERP Google", "type": "main", "index": 0 } ], [ { "node": "AI Mode", "type": "main", "index": 0 } ], [ { "node": "Get Internal Links", "type": "main", "index": 0 } ] ] }, "SERP Google": { "main": [ [ { "node": "Prepare SERP output", "type": "main", "index": 0 } ] ] }, "AI Mode": { "main": [ [ { "node": "Prepare AI Mode Output", "type": "main", "index": 0 } ] ] }, "Prepare SERP output": { "main": [ [ { "node": "Sort Results", "type": "main", "index": 0 } ] ] }, "Sort Results": { "main": [ [] ] } }, "active": false, "settings": { "executionOrder": "v1" }, "versionId": "6fce16d1-aa28-4939-9c2d-930d11c1e17f", "meta": { "instanceId": "1ee7b11b3a4bb285563e32fdddf3fbac26379ada529b942ee7cda230735046a1" }, "id": "VjpOW2V2aNV9HpQJ", "tags": [] } `
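For reference, here is a minimal sketch of the input items this Tools sub-workflow expects, as they might be built in a Code node placed before the Execute Workflow call in the parent workflow. The field names come from the "When Executed by Another Workflow" trigger above; the concrete values are only illustrative (the pinned example in the JSON shows one real call).

```javascript
// Items passed to the Tools sub-workflow. Field names match the trigger's
// workflowInputs above; the values below are illustrative placeholders.
const serpSearchCall = {
  function: "serp_search",      // routed by the Switch node to the DataForSEO SERP request
  keyword: "electrician Largo Florida",
  url: null,
  location_code: 2840,          // DataForSEO location code (value taken from the pinned example)
  language_code: "en",
};

const internalLinksCall = {
  function: "internal_links",   // routed to the Firecrawl /v2/map request
  keyword: null,
  url: "example.com",           // the HTTP node prefixes "https://" itself
  location_code: null,
  language_code: null,
};

return [{ json: serpSearchCall }, { json: internalLinksCall }];
```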
by Growth AI
Intelligent chatbot with custom knowledge base Who's it for Businesses, developers, and organizations who need a customizable AI chatbot for internal documentation access, customer support, e-commerce assistance, or any use case requiring intelligent conversation with access to specific knowledge bases. What it does This workflow creates a fully customizable AI chatbot that can be deployed on any platform supporting webhook triggers (websites, Slack, Teams, etc.). The chatbot accesses a personalized knowledge base stored in Supabase and can perform advanced actions like sending emails, scheduling appointments, or updating databases beyond simple conversation. How it works The workflow combines several powerful components: Webhook Trigger: Accepts messages from any platform that supports webhooks AI Agent: Processes user queries with customizable personality and instructions Vector Database: Searches relevant information from your Supabase knowledge base Memory System: Maintains conversation history for context and traceability Action Tools: Performs additional tasks like email sending or calendar booking Technical architecture Chat trigger connects directly to AI Agent Language model, memory, and vector store all connect as tools/components to the AI Agent Embeddings connect specifically to the Supabase Vector Store for similarity search Requirements Supabase account and project AI model API key (any LLM provider of your choice) OpenAI API key (for embeddings - this is covered in Cole Medin's tutorial) n8n built-in PostgreSQL access (for conversation memory) Platform-specific webhook configuration (optional) How to set up Step 1: Configure your trigger The template uses n8n's default chat trigger For external platforms: Replace with webhook trigger and configure your platform's webhook URL Supported platforms: Any service with webhook capabilities (websites, Slack, Teams, Discord, etc.) Step 2: Set up your knowledge base For creating and managing your vector database, follow this comprehensive guide: Watch Cole Medin's tutorial on document vectorization This video shows how to build a complete knowledge base on Supabase The tutorial covers document processing, embedding creation, and database optimization Important: The video explains the OpenAI embeddings configuration required for vector search Step 3: Configure the AI agent Define your prompt: Customize the agent's personality and role Example: "You are the virtual assistant for example.com. Help users by answering their questions about our products and services." Select your language model: Choose any AI provider you prefer (OpenAI, Anthropic, Google, etc.) 
Set behavior parameters: Define response style, tone, and limitations Step 4: Connect Supabase Vector Store Add the "Supabase Vector Store" tool to your agent Configure your Supabase project credentials Mode: Set to "retrieve-as-tool" for automatic agent integration Tool Description: Customize description (default: "Database") to describe your knowledge base Table configuration: Specify the table containing your knowledge base (example shows "growth_ai_documents") Ensure your table name matches your actual knowledge base structure Multiple tables: You can connect several tables for organized data structure The agent will automatically decide when to search the knowledge base based on user queries Step 5: Set up conversation memory (recommended) Use "Postgres Chat Memory" with n8n's built-in PostgreSQL credentials Configure table name: Choose a name for your chat history table (will be auto-created) Context Window Length: Set to 20 messages by default (adjustable based on your needs) Benefits: Conversation traceability and analytics Context retention across messages Unique conversation IDs for user sessions Stored in n8n's database, not Supabase How to customize the workflow Basic conversation features Response style: Modify prompts to change personality and tone Knowledge scope: Update Supabase tables to expand or focus the knowledge base Language support: Configure for multiple languages Response length: Set limits for concise or detailed answers Memory retention: Adjust context window length for longer or shorter conversation memory Advanced action capabilities The chatbot can be extended with additional tools for: Email automation: Send support emails when users request assistance Calendar integration: Book appointments directly in Google Calendar Database updates: Modify Airtable or other databases based on user interactions API integrations: Connect to external services and systems File handling: Process and analyze uploaded documents Platform-specific deployments Website integration Replace chat trigger with webhook trigger Configure your website's chat widget to send messages to the n8n webhook URL Handle response formatting for your specific chat interface Slack/Teams deployment Set up webhook trigger with Slack/Teams webhook URL Configure response formatting for platform-specific message structures Add platform-specific features (mentions, channels, etc.) 
E-commerce integration Connect to product databases Add order tracking capabilities Integrate with payment systems Configure support ticket creation Results interpretation Conversation management Chat history: All conversations stored in n8n's PostgreSQL database with unique IDs Context tracking: Agent maintains conversation flow and references previous messages Analytics potential: Historical data available for analysis and improvement Knowledge retrieval Semantic search: Vector database returns most relevant information based on meaning, not just keywords Automatic decision: Agent automatically determines when to search the knowledge base Source tracking: Ability to trace answers back to source documents Accuracy improvement: Continuously refine knowledge base based on user queries Use cases Internal applications Developer documentation: Quick access to technical guides and APIs HR support: Employee handbook and policy questions IT helpdesk: Troubleshooting guides and system information Training assistant: Learning materials and procedure guidance External customer service E-commerce support: Product information and order assistance Technical support: User manuals and troubleshooting Sales assistance: Product recommendations and pricing FAQ automation: Common questions and instant responses Specialized implementations Lead qualification: Gather customer information and schedule sales calls Appointment booking: Healthcare, consulting, or service appointments Order processing: Take orders and update inventory systems Multi-language support: Global customer service with language detection Workflow limitations Knowledge base dependency: Quality depends on source documentation and embedding setup Memory storage: Requires active n8n PostgreSQL connection for conversation history Platform restrictions: Some platforms may have webhook limitations Response time: Vector search may add slight delay to responses Token limits: Large context windows may increase API costs Embedding costs: OpenAI embeddings required for vector search functionality
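To make the website-integration path above concrete, here is a hedged sketch of the call a chat widget might make to the webhook trigger. The webhook path and the field names (message, sessionId, output) are assumptions for illustration, not fixed by the template; match them to whatever your webhook trigger, AI Agent, and response node actually use.

```javascript
// Hypothetical front-end call to the n8n webhook trigger.
// Path and field names are illustrative; align them with your own configuration.
async function sendChatMessage(text, sessionId) {
  const res = await fetch("https://your-n8n-instance/webhook/chatbot", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      message: text,   // the user's question, passed to the AI Agent
      sessionId,       // stable per-visitor ID so Postgres Chat Memory keeps context
    }),
  });
  const data = await res.json();
  // The reply field depends on how you configure the response node; "output" is a common default.
  return data.output;
}
```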
by Toshiya Minami
Overview
This workflow analyzes customer survey responses, groups them by sentiment (positive / neutral / negative), generates themes and insights with an AI agent, and delivers a consolidated report to your destinations (Google Sheets, Slack). It runs on a daily schedule and uses batch-based AI analysis for accuracy.

Flow: Schedule → Fetch from Sheets → Group & batch (Code) → AI analysis → Aggregate → Save/Notify (Sheets, Slack)

What You'll Need
A survey data source (Google Sheets recommended)
AI model credentials (e.g., OpenAI or OpenRouter)
Optional destinations: Google Sheets (summary sheet), Slack channel

Setup
Data source (Google Sheets): In Get Survey Responses, replace YOUR_SHEET_ID and YOUR_SHEET_NAME with your sheet details. Ensure the sheet includes columns like 満足度 (Rating), 自由記述コメント (Comment), and 回答日時 (Timestamp).
AI model: Add credentials to your preferred LLM node (OpenAI/OpenRouter). Keep the prompt's JSON-only requirement so the structured parser can consume it reliably.
Destinations: For Save to Sheet, set your output documentId / sheetName. For Slack, set the target channelId on the Slack node.

How It Works
Daily Schedule Trigger — starts the workflow at your chosen time.
Get Survey Responses (Sheets) — reads survey data.
Group & Prepare Data (Code) — classifies by rating (>=4: positive, =3: neutral, <3: negative) and creates batches (max 50 per batch); a sketch of this node appears at the end of this description.
Loop Over Batches — feeds each sentiment batch to the AI separately for cleaner signals.
Analyze Survey Batch (AI Agent) — returns structured JSON: themes, insights, recommendations.
Add Metadata (Code) — attaches the original sentiment and item counts to each AI result.
Aggregate Results (Code) — merges all batches; outputs Top Themes, Key Insights, Priority Recommendations, and an Executive Summary.
Save to Sheet / Slack — appends the summary to a sheet and posts highlights to Slack.

Data Assumptions (Columns)
Your source should include at least:
満足度 (Rating) — integer 1–5
自由記述コメント (Comment) — string
回答日時 (Timestamp) — ISO string or date

Outputs
**Consolidated summary** containing: top themes (with example quotes), key insights, priority recommendations, and an executive summary.
**Destinations**: Google Sheets (one row per run) and Slack (high-level highlights).

Customize
Adjust the sentiment thresholds (e.g., require >=5 for positive) or the batch size (default 50) in the Code node.
Tailor the AI prompt or the output JSON schema to your domain.
Add more outputs (CSV export, database insert, additional channels) in parallel after the Aggregate step.

Before You Run (Checklist)
[ ] Add credentials for Sheets / AI / Slack in Credentials
[ ] Update documentId, sheetName, and the Slack channelId
[ ] Confirm your column names match the Code node references
[ ] Verify the schedule time and timezone (e.g., Asia/Tokyo)

Troubleshooting
**Parser errors on AI output**: ensure the model response is JSON-only; reduce temperature or simplify the schema if needed.
**Only some batches run**: check the batch size in the Loop node and ensure each sentiment bucket actually contains responses.
**No output to Sheets/Slack**: verify credentials, IDs, and required fields; confirm permissions.

Security & Template Notes
Do not include credentials in the template file; users add their own after import.
Use Sticky Notes to document Overview, Setup, Processing Logic, and Output choices. This template already includes guideline-friendly notes.
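A minimal sketch of the Group & Prepare Data Code node referenced above, assuming the three columns listed under Data Assumptions. It only illustrates the classification and batching rules this template describes; adapt the field names to your own sheet.

```javascript
// Minimal sketch of the "Group & Prepare Data" Code node (Run Once for All Items),
// assuming the columns 満足度 (rating), 自由記述コメント (comment), 回答日時 (timestamp).
const BATCH_SIZE = 50;
const buckets = { positive: [], neutral: [], negative: [] };

for (const item of $input.all()) {
  const rating = Number(item.json['満足度']);
  const row = {
    rating,
    comment: item.json['自由記述コメント'] || '',
    timestamp: item.json['回答日時'],
  };
  if (rating >= 4) buckets.positive.push(row);
  else if (rating === 3) buckets.neutral.push(row);
  else buckets.negative.push(row);
}

// Emit one item per batch so "Loop Over Batches" can feed the AI agent
// one sentiment group (max 50 responses) at a time.
const out = [];
for (const [sentiment, rows] of Object.entries(buckets)) {
  for (let i = 0; i < rows.length; i += BATCH_SIZE) {
    out.push({ json: { sentiment, responses: rows.slice(i, i + BATCH_SIZE) } });
  }
}
return out;
```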
by Br1
Load Jira open issues with comments into Pinecone + RAG Agent (Direct Tool or MCP)

Who's it for
This workflow is designed for support teams, data engineers, and AI developers who want to centralize Jira issue data into a vector database. It collects open issues and their associated comments, converts them into embeddings, and loads them into Pinecone for semantic search, retrieval-augmented generation (RAG), or AI-powered support bots. It's also published as an MCP tool, so external applications can query the indexed issues directly.

How it works
The workflow automates Jira issue extraction, comment processing, and vector storage in Pinecone. Importantly, the Pinecone index is recreated at every run so that it always reflects the current set of unresolved tickets.
Trigger – A schedule trigger runs the workflow at defined times (e.g., 8:00, 11:00, 14:00, and 17:00 on weekdays).
Issue extraction with pagination – Calls the Jira REST API to fetch open issues matching a JQL query (unresolved cases created in the last year). Pagination is fully handled: issues are retrieved in batches of 25, and the workflow keeps iterating until all open issues are loaded (a sketch of this call appears at the end of this entry).
Data transformation – Extracts key fields (issue ID, key, summary, description, product, customer, classification, status, registration date).
Comments integration – Fetches all comments for each issue, filters out empty or irrelevant ones (images, dots, empty markdown), and merges them with the issue data.
Text cleaning – Converts HTML descriptions into clean plain text for processing.
Embedding generation – Uses the OpenAI Embeddings node to vectorize text.
Vector storage with index recreation – Loads embeddings and metadata into Pinecone under the jira namespace of the openissues index. The namespace is cleared at every run to ensure the index contains only unresolved tickets.
Document chunking – Splits long issue texts into smaller chunks (512 tokens, 50 overlap) for better embedding quality.
MCP publishing – Exposes the Pinecone index as an MCP tool (openissues), enabling external systems to query Jira issues semantically.

How to set up
Jira – Configure a Jira account and generate a token. Update the Jira node with credentials and adjust the JQL query if needed.
OpenAI – Set up an OpenAI API key for embeddings. Configure embedding dimensions (default: 512).
Pinecone – Create an index (e.g., openissues) with matching dimensions (512). Configure Pinecone API credentials and the namespace (jira). The index will be cleared automatically at every run before reloading unresolved issues.
Schedule – Adjust the cron expression in the Schedule Trigger to fit your update frequency.
Optional MCP – If you want to query Jira issues via MCP, configure the MCP trigger and tool nodes.

Requirements
Jira account with API access and permissions to read issues and comments.
OpenAI API key with access to the embedding model.
Pinecone account with an index created (dimensions = 512).
n8n instance with credentials set up for Jira, OpenAI, and Pinecone.

How to customize the workflow
**JQL query**: Modify it to control which issues are extracted (e.g., by project, type, or time window).
**Pagination size**: Adjust the maxResults parameter (default 25) if you want larger or smaller batches per iteration.
**Metadata fields**: Add or remove fields in the "Extract Relevant Info" code node.
**Chunk size**: Adjust chunk size/overlap in the Document Chunker for different embedding strategies.
**Embedding model**: Switch to a different embedding provider if preferred.
**Vector store**: Replace Pinecone with another supported vector database if needed.
**Downstream use**: Extend with notifications, dashboards, or AI assistants that consume the vector data.

AI Chatbot for Jira open tickets with SLA insights

Who's it for
This workflow is designed for commercial teams, customer support, and service managers who need quick, conversational access to unresolved Jira tickets. It enables them to check whether a client has open issues, see related details, and understand SLA implications without manually browsing Jira.

How it works
**Chat interface** – Provides a web-based chat that team members can use to ask natural-language questions such as "Are there any issues from client ACME?" or "Do we have tickets that have been open for a long time?"
**AI Agent** – Powered by OpenAI, it interprets questions and queries the Pinecone vector store (openissues index, jira namespace).
**Memory** – Maintains short-term chat history for more natural conversations.
**Ticket retrieval** – Uses Pinecone embeddings (dimension = 512) to fetch unresolved tickets enriched with metadata: issue key, description, customer, product, severity color, status, AM contract type, and SLA.
**SLA integration** – Service levels (Basic, Advanced, Full Service, with optional Fast Support) are provided via the SLA node. The agent explains which SLA applies based on ticket severity, registration date, and contract type.
**AI response** – Returns a friendly, collaborative summary of all tickets found, including the ticket identifier, description, customer and product, severity level (Red, Yellow, Green, White), ticket status, and contract level with an SLA explanation.

Setup
Ensure the Pinecone index (openissues, 512 dimensions) is already populated with unresolved tickets by the workflow above.
Provide OpenAI API credentials.
Ensure the SLA node includes the correct service-level definitions.
Adjust the chat branding (title, subtitle, CSS) if desired.

Requirements
Jira account with API access.
Pinecone account with an index (openissues, dimensions = 512).
OpenAI API key.
n8n instance with LangChain and chatTrigger nodes enabled.

How to customize
Change the SLA node text if your service levels differ.
Adjust the chat interface design (colors, title, subtitle).
Expand the metadata in Pinecone (e.g., add project type, priority, or assigned team).
Train with additional examples in the system message to refine AI behavior.
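To illustrate the issue-extraction step described above, here is a hedged sketch of the paginated Jira search call. The base URL, credentials, and JQL string are assumptions based on the description (unresolved issues created in the last year, fetched 25 at a time); the template's own Jira/HTTP nodes may differ in detail.

```javascript
// Hypothetical sketch of the paginated Jira extraction (Node 18+, where fetch is available).
// Placeholders: JIRA_BASE, credentials, and the JQL are illustrative, not from the template.
const JIRA_BASE = 'https://your-domain.atlassian.net';
const AUTH = 'Basic ' + Buffer.from('user@example.com:API_TOKEN').toString('base64');

async function fetchOpenIssues() {
  const jql = 'resolution = Unresolved AND created >= -365d ORDER BY created DESC';
  const pageSize = 25;                     // matches the batch size described above
  let startAt = 0;
  const issues = [];

  while (true) {
    const url = `${JIRA_BASE}/rest/api/2/search?jql=${encodeURIComponent(jql)}&startAt=${startAt}&maxResults=${pageSize}`;
    const res = await fetch(url, { headers: { Authorization: AUTH, Accept: 'application/json' } });
    const page = await res.json();
    issues.push(...page.issues);
    startAt += pageSize;
    if (startAt >= page.total) break;      // stop once all open issues are loaded
  }
  return issues;
}
```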
by Angel Menendez
Enhance Security Operations with the Venafi Slack CertBot!
Venafi Presentation - Watch Video

Our Venafi Slack CertBot is designed to facilitate immediate security operations directly from Slack. It lets end users submit Certificate Signing Requests (CSRs) that are automatically approved, or passed to the SecOps team for manual approval, depending on the VirusTotal analysis of the requested domain. Not only does this centralize requests, it also helps an organization maintain its security certifications by allowing automated processes to log and analyze requests in real time.

Workflow Highlights:
**Interactive Modals**: Utilizes Slack modals to gather user inputs for scan configurations and report generation, providing a user-friendly interface for complex operations.
**Dynamic Workflow Execution**: Integrates seamlessly with Venafi to execute CSR generation; if any issues are found, AI can generate a custom report that is then passed to the SecOps team's Slack channel for manual approval at the press of a single button.

Operational Flow:
**Parse Webhook Data**: Captures and parses incoming data from Slack to understand user commands accurately (see the sketch after this entry).
**Execute Actions**: Depending on the user's selection, the workflow triggers other actions within the flow, such as automatic VirusTotal scanning.
**Respond to Slack**: Ensures that every interaction is acknowledged, maintaining a smooth user experience by managing modal popups and sending appropriate responses.

Setup Instructions:
Verify that the Slack, Venafi, and VirusTotal API integrations are correctly configured for seamless interaction.
Customize the modal interfaces to align with your organization's operational protocols and security policies.
Test the workflow to ensure that it responds accurately to Slack commands and that the Venafi and VirusTotal integrations are functioning as expected.

Need Assistance?
Explore Venafi's Documentation or get help from the n8n Community for more detailed guidance on setup and customization.

Deploy this bot within your Slack environment to significantly enhance the efficiency and responsiveness of your security operations, enabling proactive management of CSRs.
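As a hedged illustration of the Parse Webhook Data step, the sketch below shows how a Code node might unpack a Slack interactivity payload. Slack posts interactive events as a form-encoded payload field containing JSON; the block and action IDs used here (domain_block, domain_input) are purely illustrative and not taken from this template.

```javascript
// Hypothetical parsing of a Slack interaction payload in an n8n Code node.
// Slack sends interactive events as a form-encoded "payload" field containing JSON.
const raw = $json.body?.payload ?? $json.payload;
const payload = typeof raw === 'string' ? JSON.parse(raw) : raw;

let requestedDomain = null;
if (payload.type === 'view_submission') {
  // Modal inputs live under view.state.values[blockId][actionId].value.
  // The IDs below are illustrative placeholders for the CSR request modal.
  const values = payload.view.state.values;
  requestedDomain = values.domain_block?.domain_input?.value ?? null;
}

return [{
  json: {
    interactionType: payload.type,
    user: payload.user?.username,
    requestedDomain,
  },
}];
```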
by Naser Emad
Create sophisticated shortened URLs with advanced targeting and tracking capabilities using the BioURL.link API Overview This n8n workflow provides a powerful webhook-based URL shortening service that integrates with BioURL.link's comprehensive link management platform. Transform any long URL into a smart, trackable short link with advanced targeting features through a simple HTTP POST request. Key Features Smart URL Shortening**: Create custom short URLs with optional passwords and expiry dates Geo-targeting**: Redirect users to different URLs based on their geographic location Device Targeting**: Serve different content to iPhone, Android, or other device users Language Targeting**: Customize redirects based on user language preferences Deep Linking**: Automatically redirect mobile users to app store or native apps Custom Meta Tags**: Set custom titles, descriptions, and images for social media sharing Analytics & Tracking**: Integrate tracking pixels and campaign parameters Splash Pages**: Create branded landing pages before redirects How It Works Webhook Trigger: Send a POST request to the /shorten-link endpoint API Integration: Workflow processes the request and calls BioURL.link's API Response: Returns the shortened URL with all configured features Setup Requirements BioURL.link account and API key Replace "Bearer YOURAPIKEY" in the HTTP Request node with your actual API key Customize the JSON body parameters as needed for your use case Use Cases Marketing campaigns with geo-specific landing pages Mobile app promotion with automatic app store redirects A/B testing with device or language-based targeting Social media sharing with custom preview metadata Time-sensitive promotions with expiry dates Password-protected internal links Technical Details Trigger**: Webhook (POST to /shorten-link) API Endpoint**: https://biourl.link/api/url/add Response**: Complete shortened URL data Authentication**: Bearer token required Perfect for marketers, developers, and businesses needing advanced URL management capabilities beyond basic shortening services.
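For example, a client could trigger the workflow with a request like the one below. Only the target URL is essential to the sketch; the other fields are illustrative placeholders, so consult the BioURL.link API documentation for the exact parameter names the /api/url/add endpoint accepts.

```javascript
// Hypothetical client call to the workflow's webhook. Field names other than
// "url" are illustrative; map them to the BioURL.link parameters you actually use.
const response = await fetch('https://your-n8n-instance/webhook/shorten-link', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    url: 'https://example.com/very/long/campaign/landing-page',
    custom: 'spring-sale',        // desired short slug (illustrative)
    password: 'optional-secret',  // optional password protection
    expiry: '2025-12-31',         // optional expiry date
  }),
});
console.log(await response.json()); // the shortened URL data returned by BioURL.link
```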
by Lucas Peyrin
Overview This template provides a powerful and configurable utility to convert JSON data into a clean, well-structured XML format. It is designed for developers, data analysts, and n8n users who need to interface with legacy systems, generate structured reports, or prepare data for consumption by Large Language Models (LLMs), which often exhibit improved understanding and parsing with XML-formatted input. Use Cases This workflow is ideal for solving several common data transformation problems: Preparing Data for AI Prompts:** LLMs like GPT-4 often parse XML more reliably than JSON within a prompt. The explicit closing tags and hierarchical nature of XML reduce ambiguity, leading to better and more consistent responses from the AI. Interfacing with Legacy Systems:** Many enterprise systems, SOAP APIs, and older software exclusively accept or produce data in XML format. This template acts as a bridge, allowing modern JSON-based services to communicate with them seamlessly. Generating Structured Reports:** Create XML files for reporting or data interchange standards that require a specific, well-defined structure. Improving Data Readability:** For complex nested data, a well-formatted XML can be easier for humans to read and debug than a compact JSON string. How it works This workflow acts as a powerful, configurable JSON to XML converter. It takes a JSON object as input and performs the following steps: Recursively Parses JSON: It intelligently navigates through the entire JSON structure, including nested objects and arrays. Handles Data Types: Primitive Arrays (e.g., ["a", "b", "c"]) are joined into a single string with a safe delimiter. Complex Arrays (of objects) are converted into indexed XML tags (<0>, <1>, etc.). Dates are automatically detected and formatted into a readable YYYY-MM-DD HH:mm:ss format. Generates XML String: It constructs a final XML string based on the logic and configuration set inside the Code node. The output is provided in a single xml field, ready for use. Set up steps Setup time: ~1 minute This workflow is designed to be used as a sub-workflow (or "child workflow"). In your main workflow, add an Execute Workflow node. In the Workflow parameter of that node, select this "JSON to XML Converter" workflow. That's it! You can now send JSON data to the Execute Workflow node and it will return the converted XML string in the xml field. Customization Options The true power of this template lies in its customizability, all managed within the configuration section at the top of the Code node. This allows you to fine-tune the output XML to your exact needs. REMOVE_EMPTY_VALUES**: Set to true (default) to completely omit tags for null, undefined, or empty string values, resulting in a cleaner XML. Set to false to include empty tags like <myTag></myTag>. Newline Formatting**: Control the spacing and readability of the output with four distinct settings: NEWLINES_TOP_LEVEL: Adjusts the newlines between root-level elements. NEWLINES_ARRAY_ITEMS: Controls spacing between items in a complex array (e.g., between <0> and <1>). NEWLINES_OBJECT_PROPERTIES: Manages newlines between the properties of an object. NEWLINES_WITHIN_TAGS: Adds newlines between an opening/closing tag and its content for an "indented" look. Prerequisites An active n8n instance. Basic familiarity with JSON and XML data structures. Understanding of how to use the Execute Workflow node to run sub-workflows.
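The snippet below is a minimal sketch of the conversion rules described above (primitive arrays joined into one string, complex arrays as indexed tags, empty values optionally dropped). It is written for this description as an illustration, not the template's actual Code node, which additionally handles dates, newline settings, and the other configuration options.

```javascript
// Minimal sketch of the JSON-to-XML rules, in the style of an n8n Code node.
// The " | " delimiter is an illustrative choice for joining primitive arrays.
const REMOVE_EMPTY_VALUES = true;

function toXml(value, tag) {
  if (value === null || value === undefined || value === '') {
    return REMOVE_EMPTY_VALUES ? '' : `<${tag}></${tag}>`;
  }
  if (Array.isArray(value)) {
    // Primitive arrays become one delimited string; arrays of objects
    // become indexed child tags (<0>, <1>, ...).
    if (value.every(v => typeof v !== 'object' || v === null)) {
      return `<${tag}>${value.join(' | ')}</${tag}>`;
    }
    return `<${tag}>${value.map((v, i) => toXml(v, String(i))).join('')}</${tag}>`;
  }
  if (typeof value === 'object') {
    const inner = Object.entries(value).map(([k, v]) => toXml(v, k)).join('');
    return `<${tag}>${inner}</${tag}>`;
  }
  return `<${tag}>${String(value)}</${tag}>`;
}

// Example: toXml({ user: { name: "Ada", tags: ["ai", "math"] } }, "root")
// → <root><user><name>Ada</name><tags>ai | math</tags></user></root>
const input = $input.first().json;
return [{ json: { xml: toXml(input, 'root') } }];
```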
by David Ashby
Complete MCP server exposing all Mailcheck Tool operations to AI agents. Zero configuration needed - 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Mailcheck Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Mailcheck Tool node with full error handling

📋 Available Operations (1 total)
Every possible Mailcheck Tool operation is included:
🔧 Email (1 operation)
• Check an email

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Mailcheck Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Mailcheck Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
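As a concrete example of the $fromAI() mechanism described above, a tool node parameter such as the email address can be filled with an n8n expression like the one below; the key name and description shown are illustrative.

```javascript
// n8n expression placed in the tool node's "Email" parameter; the connected
// AI agent supplies the value for 'email' at call time. An optional description
// and type hint can be added, e.g. $fromAI('email', 'Email address to check', 'string').
{{ $fromAI('email') }}
```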
by David Ashby
Complete MCP server exposing all LinkedIn Tool operations to AI agents. Zero configuration needed - 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every LinkedIn Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n LinkedIn Tool node with full error handling

📋 Available Operations (1 total)
Every possible LinkedIn Tool operation is included:
🔧 Post (1 operation)
• Create a post

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native LinkedIn Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every LinkedIn Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing all Google Translate Tool operations to AI agents. Zero configuration needed - 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Google Translate Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Google Translate Tool node with full error handling

📋 Available Operations (1 total)
Every possible Google Translate Tool operation is included:
🔧 Language (1 operation)
• Translate a language

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Google Translate Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Google Translate Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
🛠️ Google Cloud Natural Language Tool MCP Server

Complete MCP server exposing all Google Cloud Natural Language Tool operations to AI agents. Zero configuration needed - 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Google Cloud Natural Language Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Google Cloud Natural Language Tool node with full error handling

📋 Available Operations (1 total)
Every possible Google Cloud Natural Language Tool operation is included:
🔧 Document (1 operation)
• Analyze sentiment

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Google Cloud Natural Language Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Google Cloud Natural Language Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing all Google Ads Tool operations to AI agents. Zero configuration needed - 2 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Google Ads Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Google Ads Tool node with full error handling

📋 Available Operations (2 total)
Every possible Google Ads Tool operation is included:
📢 Campaign (2 operations)
• Get many campaigns
• Get a campaign

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Google Ads Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Google Ads Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.