by Supira Inc.
How It Works
This workflow automates social media content creation and posting. It starts by receiving raw text input through a Webhook (for example, from a LINE Bot) and saves the content into a Notion database for centralized storage. Next, GPT-4 generates platform-specific captions for Instagram, Threads, X/Twitter, and Blog. Instagram captions are prepared for automatic publishing, while Threads, X, and Blog drafts are stored in Notion for later review and manual posting. The workflow then fetches book cover images or other visuals from external APIs such as Google Books, OpenBD, or OpenLibrary. The chosen image is uploaded to Cloudinary to generate a secure, optimized URL. Finally, the Instagram Graph API is used to create a media container and publish the post automatically with the caption and image. In this way, the workflow provides both full automation for Instagram and reusable drafts for the other platforms.

Requirements
- Notion account with a database configured for text and captions
- Cloudinary account for image hosting
- Instagram Business account connected to the Meta Developer Platform
- GPT-4 (via the OpenAI or LangChain node in n8n)

Setup Instructions
1. Configure the Webhook node to capture text input.
2. Update the Notion database ID and property keys to match your schema.
3. Add Cloudinary credentials (`cloud_name`, `upload_preset`) in the HTTP Request node.
4. Set `IG_ACCESS_TOKEN` as an environment variable.
5. Activate the workflow and test with a sample input.

Customization
- Adjust caption prompts for style, hashtags, or character limits.
- Add additional GPT nodes for more platforms.
- Replace or extend image sources as needed.
- Integrate a scheduler (Cron node) to post at specific times.
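For reference, the final Instagram publishing step described above is a two-call sequence against the Graph API: create a media container from the Cloudinary image URL, then publish it. The sketch below (TypeScript) is a minimal illustration, assuming an `IG_USER_ID` variable for your Instagram Business account ID (a hypothetical name, used alongside the `IG_ACCESS_TOKEN` from the setup steps); error handling is omitted.

```typescript
// Minimal sketch of the container-and-publish sequence, assuming IG_USER_ID and
// IG_ACCESS_TOKEN are set as environment variables.
const GRAPH = "https://graph.facebook.com/v19.0";
const igUserId = process.env.IG_USER_ID!;   // your Instagram Business account ID
const token = process.env.IG_ACCESS_TOKEN!; // long-lived access token

async function publishToInstagram(imageUrl: string, caption: string): Promise<string> {
  // 1) Create a media container pointing at the Cloudinary image URL
  const containerRes = await fetch(
    `${GRAPH}/${igUserId}/media?image_url=${encodeURIComponent(imageUrl)}` +
      `&caption=${encodeURIComponent(caption)}&access_token=${token}`,
    { method: "POST" }
  );
  const { id: creationId } = (await containerRes.json()) as { id: string };

  // 2) Publish the container as a feed post
  const publishRes = await fetch(
    `${GRAPH}/${igUserId}/media_publish?creation_id=${creationId}&access_token=${token}`,
    { method: "POST" }
  );
  const { id: mediaId } = (await publishRes.json()) as { id: string };
  return mediaId; // ID of the published post
}
```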
by Mauricio Perera
📁 Analyze uploaded images, videos, audio, and documents with specialized tools — powered by a lightweight language-only agent.

🧭 What It Does
This workflow enables multimodal file analysis using Google Gemini tools connected to a text-only LLM agent. Users can upload images, videos, audio files, or documents via a chat interface. The workflow will:
- Upload each file to Google Gemini and obtain an accessible URL.
- Dynamically generate contextual prompts based on the file(s) and user message.
- Allow the agent to invoke Gemini tools for specific media types as needed.
- Return a concise, helpful response based on the analysis.

🚀 Use Cases
- **Customer support**: Let users upload screenshots, documents, or recordings and get helpful insights or summaries.
- **Multimedia QA**: Review visual, audio, or video content for correctness or compliance.
- **Educational agents**: Interpret content from PDFs, diagrams, or audio recordings on the fly.
- **Low-cost multimodal assistants**: Achieve multimodal functionality without relying on large vision-language models.

🎯 Why This Architecture Matters
Unlike end-to-end multimodal LLMs (like Gemini 1.5 or GPT-4o), this template:
- Uses a text-only LLM (Qwen 32B via Groq) for reasoning.
- Delegates media analysis to specialized Gemini tools.

✅ Advantages

| Feature | Benefit |
| --- | --- |
| 🧩 Modular | LLM and tools are decoupled and can be updated independently |
| 💸 Cost-Efficient | No need to pay for full multimodal models; tools are only used when needed |
| 🔧 Tool-based Reasoning | Agent invokes tools on demand, much like OpenAI's Toolformer setup |
| ⚡ Fast | Groq LLMs offer ultra-fast responses with low latency |
| 📚 Memory | Includes a context buffer for multi-turn chats (15 messages) |

🧪 How It Works
🔹 Input via Chat: Users submit a message and (optionally) files via the chatTrigger.
🔹 File Handling: If no files are attached, the prompt is passed directly to the agent. If files are included, they are split and uploaded to Gemini (to get public URLs), and their metadata (name, type, URL) is collected and embedded into the prompt.
🔹 Prompt Construction: A new chatInput is dynamically generated, containing the user message followed by `Media: [array of file data]`.
🔹 Agent Reasoning: The Langchain Agent receives the enriched prompt, the file URLs, memory context (15 turns), and access to 4 Gemini tools: IMG (analyze image), VIDEO (analyze video), AUDIO (analyze audio), and DOCUMENT (analyze document). The agent autonomously decides whether and how to use tools, then responds with concise output.

🧱 Nodes & Services

| Category | Node / Tool | Purpose |
| --- | --- | --- |
| Chat Input | chatTrigger | User interface with file support |
| File Processing | splitOut, splitInBatches | Process each uploaded file |
| Upload | googleGemini | Uploads each file to Gemini, gets URL |
| Metadata | set, aggregate | Builds structured file info |
| AI Agent | Langchain Agent | Receives context + file data |
| Tools | googleGeminiTool | Analyze media with Gemini |
| LLM | lmChatGroq (Qwen 32B) | Text reasoning, high speed |
| Memory | memoryBufferWindow | Maintains session context |

⚙️ Setup Instructions
1. 🔑 Required Credentials
- **Groq API key** (for the Qwen 32B model)
- **Google Gemini API key** (Palm / Gemini 1.5 tools)
2. 🧩 Nodes That Need Setup
Replace existing credentials on:
- Upload a file
- Each GeminiTool (IMG, VIDEO, AUDIO, DOCUMENT)
- lmChatGroq
3. ⚠️ File Size & Format Considerations
Some Gemini tools have file size or format restrictions. You may add validation nodes before uploading if needed.

🛠️ Optional Improvements
- Add logging and error handling (e.g., for upload failures).
- Add MIME-type filtering to choose the right tool explicitly.
- Extend with OCR or transcription services before analysis.
- Integrate with Slack, Telegram, or WhatsApp for chat delivery.

🧪 Example Use Case
> "Hola, ¿qué dice este PDF?" ("Hi, what does this PDF say?")
The user uploads a document → the agent routes it to the Gemini DOCUMENT tool → receives the extracted content → the LLM summarizes it in Spanish.

🧰 Tags
multimodal, agent, langchain, groq, gemini, image analysis, audio analysis, document parsing, video analysis, file uploader, chat assistant, LLM tools, memory, AI tools

📂 Files
This template is ready to use as-is in n8n. No external webhooks or integrations required.
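The Prompt Construction step described above can be sketched as a small helper that merges the user message with the collected file metadata. This is illustrative only; the field names and the exact item shape inside n8n will differ.

```typescript
// Hypothetical sketch of the Prompt Construction step: combine the user's message
// with the metadata collected for each uploaded file. Field names (name, mimeType,
// url) are illustrative, not the exact n8n item schema.
interface FileMeta {
  name: string;
  mimeType: string;
  url: string; // Gemini file URL returned by the upload step
}

function buildChatInput(userMessage: string, files: FileMeta[]): string {
  if (files.length === 0) return userMessage; // no files: pass the prompt straight through
  const media = files.map((f) => ({ name: f.name, type: f.mimeType, url: f.url }));
  return `${userMessage}\n\nMedia: ${JSON.stringify(media, null, 2)}`;
}

// Example (illustrative URL):
// buildChatInput("What does this PDF say?", [{ name: "report.pdf",
//   mimeType: "application/pdf", url: "https://generativelanguage.googleapis.com/v1beta/files/abc" }])
```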
by David Ashby
🛠️ Pipedrive Tool MCP Server
Complete MCP server exposing all Pipedrive Tool operations to AI agents. Zero configuration needed - all 45 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Pipedrive Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Pipedrive Tool node with full error handling

📋 Available Operations (45 total)
Every possible Pipedrive Tool operation is included:

🔧 Activity (5 operations)
• Create an activity • Delete an activity • Get an activity • Get many activities • Update an activity

💰 Deal (7 operations)
• Create a deal • Delete a deal • Duplicate a deal • Get a deal • Get many deals • Search a deal • Update a deal

🔧 Deal Activity (1 operation)
• Get many deal activities

🔧 Deal Product (4 operations)
• Add a deal product • Get many deal products • Remove a deal product • Update a deal product

📄 File (5 operations)
• Create a file • Delete a file • Download a file • Get a file • Update details of a file

🔧 Lead (5 operations)
• Create a lead • Delete a lead • Get a lead • Get many leads • Update a lead

🔧 Note (5 operations)
• Create a note • Delete a note • Get a note • Get many notes • Update a note

🏢 Organization (6 operations)
• Create an organization • Delete an organization • Get an organization • Get many organizations • Search an organization • Update an organization

👥 Person (6 operations)
• Create a person • Delete a person • Get a person • Get many people • Search a person • Update a person

🔧 Product (1 operation)
• Get many products

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Pipedrive Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Pipedrive Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
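To illustrate the $fromAI() placeholders mentioned above: a tool node parameter can be left for the agent to fill at call time. The snippet below is a hedged sketch, not the exact node JSON; the parameter names (`dealId`, `term`) are hypothetical, and the expression signature may vary slightly between n8n versions.

```typescript
// Illustrative only: how tool-node parameters might delegate values to the agent.
// The string values are n8n expressions evaluated at runtime.
const getDealParameters = {
  resource: "deal",
  operation: "get",
  // The agent supplies the deal ID when it decides to call this tool
  dealId: "={{ $fromAI('dealId', 'Numeric ID of the Pipedrive deal to fetch', 'number') }}",
};

const searchDealParameters = {
  resource: "deal",
  operation: "search",
  term: "={{ $fromAI('term', 'Free-text search query for deals', 'string') }}",
};
```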
by Baptiste Fort
Who is it for?
This workflow is perfect for anyone who wants to:
- **Automatically collect contacts from Google Maps**: emails, phone numbers, websites, social media (LinkedIn, Facebook), city, ratings, and reviews.
- **Organize everything neatly in Airtable**, without dealing with messy CSV exports that cause headaches.
- **Send a personalized email to each lead**, without writing it or hitting "send" yourself.
👉 In short, it's the perfect tool for marketing agencies, freelancers in prospecting, or sales teams tired of endless copy-paste. If you want to automate manual tasks, visit our French agency 0vni – Agence automatisation.

How does it work?
Here's the pipeline:
1. Scrape Google Maps with Apify (business name, email, website, phone, LinkedIn, Facebook, city, rating, etc.).
2. Clean and map the data so everything is well-structured (Company, Email, Phone, etc.).
3. Send everything into Airtable to build a clear, filterable database.
4. Trigger an automatic email via Gmail, personalized for each lead.
👉 The result: a real prospecting machine for local businesses.

What you need before starting
✅ An Apify account (for Google Maps scraping).
✅ An Airtable account with a prepared base (see structure below).
✅ A Gmail account (to send automatic emails).

Airtable Base Structure
Your table should contain the following columns:

| Company | Email | Phone Number | Website | LinkedIn | Facebook | City | Category | Google Maps Reviews | Google Maps Link |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 4 As | contact@4-as.fr | +33 1 89 36 89 00 | https://www.4-as.fr/ | linkedin.com/… | facebook.com/… | 94100 Saint-Maur | training, center | 48 reviews / 5 ★ | maps.google.com/… |

Detailed Workflow Steps

Step 1 – GO Trigger
**Node**: Manual Trigger
**Purpose**: Start the workflow manually.
👉 You can replace this trigger with a Webhook (to launch the flow via a URL) or a Cron (to run it automatically on a schedule).

Step 2 – Scrape Google Maps
**Node**: HTTP Request
**Method**: POST
Where to find the Apify URL?
1. Go to Google Maps Email Leads Fast Scraper
2. Click on API (top right)
3. Open API Endpoints
4. Copy the URL of the 3rd option: Run Actor synchronously and get dataset items
👉 This URL already includes your Apify API token.
**Body Content Type**: JSON
**Body JSON (example)**:
{ "area_height": 10, "area_width": 10, "emails_only": true, "gmaps_url": "https://www.google.com/maps/search/training+centers+near+Amiens/", "max_results": 200, "search_query": "training center" }

Step 3 – Wait
**Node**: Wait
**Purpose**: Give the scraper enough time to return data.
**Recommended delay**: 10 seconds (adjust if needed).
👉 This ensures that Apify has finished processing before we continue.

Step 4 – Mapping
**Node**: Set
**Purpose**: Clean and reorganize the raw dataset into structured fields that match the Airtable columns.
Assignments (example):
- Company = {{ $json.name }}
- Email = {{ $json.email }}
- Phone = {{ $json.phone_number }}
- Website = {{ $json.website_url }}
- LinkedIn = {{ $json.linkedin }}
- Facebook = {{ $json.facebook }}
- City = {{ $json.city }}
- Category = {{ $json.google_business_categories }}
- Google Maps Reviews = {{ $json.reviews_number }} reviews, rating {{ $json.review_score }}/5
- Google Maps Link = {{ $json.google_maps_url }}
👉 Result: The data is now clean and ready for Airtable.
Step 5 – Airtable Storage
**Node**: Airtable → Create Record
**Parameters**:
- Credential to connect with: Airtable Personal Access Token account
- Resource: Record
- Operation: Create
- Base: Select from list → your base (example: GOOGLE MAPS SCRAPT)
- Table: Select from list → your table (example: Google maps scrapt)
- Mapping Column Mode: Map Each Column Manually
👉 To get your Base ID and Table ID, open your Airtable base in the browser: https://airtable.com/appA6eMHOoquiTCeO/tblZFszM5ubwwSYDK
Here:
- Base ID = appA6eMHOoquiTCeO
- Table ID = tblZFszM5ubwwSYDK

Authentication
1. Go to: https://airtable.com/create/tokens
2. Create a new Personal Access Token
3. Give it access to the correct base
4. Copy the token into n8n credentials (select Airtable Personal Access Token).

Field Mapping (example)
- Company: {{ $json['Company'] }}
- Email: {{ $json.Email }}
- Phone: {{ $json['Phone'] }}
- Website: {{ $json['Website'] }}
- LinkedIn: {{ $json.LinkedIn }}
- Facebook: {{ $json.Facebook }}
- City: {{ $json.City }}
- Category: {{ $json['Category'] }}
- Google Maps Reviews: {{ $json['Google Maps Reviews'] }}
- Google Maps Link: {{ $json['Google Maps Link'] }}
👉 Result: Each lead scraped from Google Maps is automatically saved into Airtable, ready to be filtered, sorted, or used for outreach.

Step 6 – Automatic Email
**Node**: Gmail → Send Email
**Parameters**:
- To: = {{ $json.fields.Email }}
- Subject: = {{ $json.fields['Company'] }}
- Message: HTML email with dynamic lead details.
Example HTML message:
Hello {{ $json.fields['Company'] }} team, I design custom automations for training centers. Goal: zero repetitive manual tasks, from registration to invoicing. Details: {{ $json.fields['Company'] }} in {{ $json.fields.City }} — website: {{ $json.fields['Website'] }} — {{ $json.fields['Google Maps Reviews'] }} Interested in a quick 15-min call to see a live demo?
👉 Result: Each contact receives a fully personalized email with their company name, city, website, and Google Maps rating.

Final Result
With just one click:
1. Scrape Google Maps (Apify).
2. Clean and structure the data (Set).
3. Save everything into Airtable.
4. Send personalized emails via Gmail.
👉 All without copy-paste, without CSV, and without Excel headaches.
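For context, the Airtable Create Record step (Step 5 above) is equivalent to a single REST call. The sketch below (TypeScript) uses the example base and table IDs from this guide and a Personal Access Token read from `AIRTABLE_PAT` (a hypothetical variable name); replace these with your own values.

```typescript
// Rough equivalent of the "Create Record" step against the Airtable REST API.
// Base/table IDs are the example values above; AIRTABLE_PAT holds your Personal Access Token.
const baseId = "appA6eMHOoquiTCeO";
const tableId = "tblZFszM5ubwwSYDK";

async function createLead(fields: Record<string, string>) {
  const res = await fetch(`https://api.airtable.com/v0/${baseId}/${tableId}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.AIRTABLE_PAT}`,
      "Content-Type": "application/json",
    },
    // e.g. { Company: "4 As", Email: "contact@4-as.fr", City: "94100 Saint-Maur" }
    body: JSON.stringify({ fields }),
  });
  if (!res.ok) throw new Error(`Airtable error: ${res.status}`);
  return res.json(); // created record, including its Airtable record ID
}
```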
by David Ashby
Need help? Want access to this workflow + many more paid workflows + live Q&A sessions with a top verified n8n creator? Join the community Complete MCP server exposing 23 AWS Budgets API operations to AI agents. ⚡ Quick Setup Import this workflow into your n8n instance Credentials Add AWS Budgets credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the AWS Budgets API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://budgets.amazonaws.com • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (23 total) 🔧 #X-Amz-Target=Awsbudgetservicegateway.Createbudget (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.CreateBudget: Creates a budget and, if included, notifications and subscribers. <important> Only one of BudgetLimit or PlannedBudgetLimits can be present in the syntax at one time. Use the syntax that matches your case. The Request Syntax section shows the BudgetLimit syntax. For PlannedBudgetLimits, see the Examples section. </important> 🔧 #X-Amz-Target=Awsbudgetservicegateway.Createbudgetaction (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.CreateBudgetAction: Creates a budget action. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Createnotification (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.CreateNotification: Creates a notification. You must create the budget before you create the associated notification. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Createsubscriber (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.CreateSubscriber: Creates a subscriber. You must create the associated budget and notification before you create the subscriber. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Deletebudget (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DeleteBudget: Deletes a budget. You can delete your budget at any time. <important> Deleting a budget also deletes the notifications and subscribers that are associated with that budget. </important> 🔧 #X-Amz-Target=Awsbudgetservicegateway.Deletebudgetaction (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DeleteBudgetAction: Deletes a budget action. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Deletenotification (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DeleteNotification: Deletes a notification. <important> Deleting a notification also deletes the subscribers that are associated with the notification. </important> 🔧 #X-Amz-Target=Awsbudgetservicegateway.Deletesubscriber (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DeleteSubscriber: Deletes a subscriber. <important> Deleting the last subscriber to a notification also deletes the notification. </important> 🔧 #X-Amz-Target=Awsbudgetservicegateway.Describebudget (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudget: Describes a budget. <important> The Request Syntax section shows the BudgetLimit syntax. For PlannedBudgetLimits, see the Examples section. </important> 🔧 #X-Amz-Target=Awsbudgetservicegateway.Describebudgetaction (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetAction: Describes a budget action detail. 
🔧 #X-Amz-Target=Awsbudgetservicegateway.Describebudgetactionhistories (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetActionHistories: Describes a budget action history detail. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Describebudgetactionsforaccount (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetActionsForAccount: Describes all of the budget actions for an account. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Describebudgetactionsforbudget (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetActionsForBudget: Describes all of the budget actions for a budget. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Describebudgetnotificationsforaccount (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetNotificationsForAccount: Lists the budget names and notifications that are associated with an account. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Describebudgetperformancehistory (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgetPerformanceHistory: Describes the history for DAILY, MONTHLY, and QUARTERLY budgets. Budget history isn't available for ANNUAL budgets. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Describebudgets (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeBudgets: Lists the budgets that are associated with an account. <important> The Request Syntax section shows the BudgetLimit syntax. For PlannedBudgetLimits, see the Examples section. </important> 🔧 #X-Amz-Target=Awsbudgetservicegateway.Describenotificationsforbudget (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeNotificationsForBudget: Lists the notifications that are associated with a budget. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Describesubscribersfornotification (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.DescribeSubscribersForNotification: Lists the subscribers that are associated with a notification. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Executebudgetaction (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.ExecuteBudgetAction: Executes a budget action. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Updatebudget (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.UpdateBudget: Updates a budget. You can change every part of a budget except for the budgetName and the calculatedSpend. When you modify a budget, the calculatedSpend drops to zero until Amazon Web Services has new usage data to use for forecasting. <important> Only one of BudgetLimit or PlannedBudgetLimits can be present in the syntax at one time. Use the syntax that matches your case. The Request Syntax section shows the BudgetLimit syntax. For PlannedBudgetLimits, see the Examples section. </important> 🔧 #X-Amz-Target=Awsbudgetservicegateway.Updatebudgetaction (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.UpdateBudgetAction: Updates a budget action. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Updatenotification (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.UpdateNotification: Updates a notification. 🔧 #X-Amz-Target=Awsbudgetservicegateway.Updatesubscriber (1 endpoints) • POST /#X-Amz-Target=AWSBudgetServiceGateway.UpdateSubscriber: Updates a subscriber. 
🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native AWS Budgets API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Cursor: Add MCP server SSE URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
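As a rough illustration of what the HTTP Request nodes send, each operation above is a POST to the service endpoint with an `X-Amz-Target` header and a JSON body. The sketch below shows the assumed shape for DescribeBudgets only; SigV4 request signing is omitted here because the AWS credentials in n8n handle it, and the account ID is a placeholder.

```typescript
// Illustrative shape of an AWS Budgets call (DescribeBudgets). The real request
// must be SigV4-signed; this sketch only shows the target header and JSON body.
const body = {
  AccountId: "123456789012", // placeholder account ID
  MaxResults: 10,
};

const request = {
  method: "POST",
  url: "https://budgets.amazonaws.com/",
  headers: {
    "Content-Type": "application/x-amz-json-1.1", // assumed AWS JSON protocol content type
    "X-Amz-Target": "AWSBudgetServiceGateway.DescribeBudgets",
  },
  body: JSON.stringify(body),
};
```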
by keisha kalra
Try It Out!
This n8n template helps you analyze Google Maps reviews for a list of restaurants, summarize them with AI, and identify optimization opportunities—all in one automated workflow. Whether you're managing multiple locations, helping local restaurants improve their digital presence, or conducting a competitor analysis, this workflow helps you extract insights from dozens of reviews in minutes.

How It Works
1. Start with a pre-filled list of restaurants in Google Sheets.
2. The workflow uses SerpAPI to scrape Google Maps reviews for each listing.
3. Reviews with content are passed to ChatGPT for summarization.
4. Empty or failed reviews are logged in a separate tab for easy follow-up.
5. Results are stored back in your Google Sheet for analysis or sharing.

How To Use
- Customize the input list in Google Sheets with your own restaurants.
- Update the OpenAI prompt if you want a different style of summary.
- You can trigger this manually or swap in a schedule, webhook, or other event.

Requirements
- A SerpAPI account to fetch reviews
- An OpenAI account for ChatGPT summarization
- Access to Google Sheets and n8n

Who Is It For?
This is helpful for people looking to analyze a large batch of Google reviews in a short amount of time. It can also be used to compare restaurants and see where each can be optimized.

How To Set Up
Add a SerpAPI endpoint to the HTTP Request node. For more help, refer to the n8n documentation: https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi/
Happy Automating!
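As a concrete example of the review fetch, the HTTP Request node can call SerpAPI's Google Maps Reviews engine roughly as sketched below (TypeScript). This is an assumption about the endpoint shape; the `data_id` identifies the place and comes from a prior Google Maps search result, so check the SerpAPI docs for the exact parameter names.

```typescript
// Sketch of the review fetch, assuming SerpAPI's Google Maps Reviews engine.
// SERPAPI_KEY is a hypothetical environment variable holding your API key.
async function fetchReviews(dataId: string) {
  const params = new URLSearchParams({
    engine: "google_maps_reviews",
    data_id: dataId,
    api_key: process.env.SERPAPI_KEY ?? "",
  });
  const res = await fetch(`https://serpapi.com/search.json?${params}`);
  const json = await res.json();
  // Keep only reviews that actually contain text, mirroring the workflow's filter
  return (json.reviews ?? []).filter((r: any) => r.snippet && r.snippet.trim().length > 0);
}
```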
by David Ashby
Complete MCP server exposing 21 api.clarify.io API operations to AI agents. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator.. All 100% free? Join the community Import this workflow into your n8n instance Credentials Add api.clarify.io credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the api.clarify.io API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://api.clarify.io/ • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (21 total) 🔧 V1 (21 endpoints) • GET /v1/bundles: Add Media to Track • POST /v1/bundles: Create a bundle • DELETE /v1/bundles/{bundle_id}: Delete a bundle • GET /v1/bundles/{bundle_id}: Get a bundle • PUT /v1/bundles/{bundle_id}: Update a bundle • GET /v1/bundles/{bundle_id}/insights: Get bundle insights • POST /v1/bundles/{bundle_id}/insights: Request an insight to be run • GET /v1/bundles/{bundle_id}/insights/{insight_id}: Get bundle insight • DELETE /v1/bundles/{bundle_id}/metadata: Delete bundle metadata • GET /v1/bundles/{bundle_id}/metadata: Get bundle metadata • PUT /v1/bundles/{bundle_id}/metadata: Update bundle metadata • DELETE /v1/bundles/{bundle_id}/tracks: Delete bundle tracks • GET /v1/bundles/{bundle_id}/tracks: Get bundle tracks • POST /v1/bundles/{bundle_id}/tracks: Add a track for a bundle • PUT /v1/bundles/{bundle_id}/tracks: Update a tracks for a bundle • DELETE /v1/bundles/{bundle_id}/tracks/{track_id}: Delete a bundle track • GET /v1/bundles/{bundle_id}/tracks/{track_id}: Get bundle track • PUT /v1/bundles/{bundle_id}/tracks/{track_id}: Add media to a track • GET /v1/reports/scores: Generate Group Report • GET /v1/reports/trends: Generate Trends Report • GET /v1/search: Search Bundles 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native api.clarify.io API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Cursor: Add MCP server SSE URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 18 Bufferapp API operations to AI agents. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator.. All 100% free? Join the community Import this workflow into your n8n instance Credentials Add Bufferapp credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the Bufferapp API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to https://api.bufferapp.com/1/ • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (18 total) 🔧 Info (1 endpoints) • GET /info/configuration{mediaTypeExtension}: Get Configuration 🔧 Links (1 endpoints) • GET /links/shares{mediaTypeExtension}: Get Link Shares 🔧 Profiles (7 endpoints) • POST /profiles/{id}/schedules/update{mediaTypeExtension}: Update Profile Schedules • GET /profiles/{id}/schedules{mediaTypeExtension}: Get Profile Schedules • GET /profiles/{id}/updates/pending{mediaTypeExtension}: Get Pending Updates • POST /profiles/{id}/updates/reorder{mediaTypeExtension}: Reorder Profile Updates • GET /profiles/{id}/updates/sent{mediaTypeExtension}: Get Sent Updates • POST /profiles/{id}/updates/shuffle{mediaTypeExtension}: Shuffle Profile Updates • GET /profiles/{id}{mediaTypeExtension}: Get Profile Details 🔧 Profiles{Mediatypeextension} (1 endpoints) • GET /profiles{mediaTypeExtension}: List Profiles 🔧 Updates (7 endpoints) • POST /updates/create{mediaTypeExtension}: Create Status Update • POST /updates/{id}/destroy{mediaTypeExtension}: Delete Status Update • GET /updates/{id}/interactions{mediaTypeExtension}: Get Update Interactions • POST /updates/{id}/move_to_top{mediaTypeExtension}: Move Update to Top • POST /updates/{id}/share{mediaTypeExtension}: Share Update Now • POST /updates/{id}/update{mediaTypeExtension}: Edit Status Update • GET /updates/{id}{mediaTypeExtension}: Get Update Details 🔧 User{Mediatypeextension} (1 endpoints) • GET /user{mediaTypeExtension}: Get User Details 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native Bufferapp API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Cursor: Add MCP server SSE URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 27 Amazon CloudWatch Application Insights API operations to AI agents. ⚡ Quick Setup Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator.. All 100% free? Join the community Import this workflow into your n8n instance Credentials Add Amazon CloudWatch Application Insights credentials Activate the workflow to start your MCP server Copy the webhook URL from the MCP trigger node Connect AI agents using the MCP URL 🔧 How it Works This workflow converts the Amazon CloudWatch Application Insights API into an MCP-compatible interface for AI agents. • MCP Trigger: Serves as your server endpoint for AI agent requests • HTTP Request Nodes: Handle API calls to http://applicationinsights.{region}.amazonaws.com • AI Expressions: Automatically populate parameters via $fromAI() placeholders • Native Integration: Returns responses directly to the AI agent 📋 Available Operations (27 total) 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Createapplication (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.CreateApplication: Adds an application that is created from a resource group. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Createcomponent (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.CreateComponent: Creates a custom component by grouping similar standalone instances to monitor. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Createlogpattern (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.CreateLogPattern: Adds an log pattern to a LogPatternSet. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Deleteapplication (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteApplication: Removes the specified application from monitoring. Does not delete the application. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Deletecomponent (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteComponent: Ungroups a custom component. When you ungroup custom components, all applicable monitors that are set up for the component are removed and the instances revert to their standalone status. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Deletelogpattern (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteLogPattern: Removes the specified log pattern from a LogPatternSet. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Describeapplication (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeApplication: Describes the application. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Describecomponent (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponent: Describes a component and lists the resources that are grouped together in a component. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Describecomponentconfiguration (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponentConfiguration: Describes the monitoring configuration of the component. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Describecomponentconfigurationrecommendation (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponentConfigurationRecommendation: Describes the recommended monitoring configuration of the component. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Describelogpattern (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeLogPattern: Describe a specific log pattern from a LogPatternSet. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Describeobservation (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeObservation: Describes an anomaly or error with the application. 
🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Describeproblem (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeProblem: Describes an application problem. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Describeproblemobservations (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeProblemObservations: Describes the anomalies or errors associated with the problem. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Listapplications (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.ListApplications: Lists the IDs of the applications that you are monitoring. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Listcomponents (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.ListComponents: Lists the auto-grouped, standalone, and custom components of the application. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Listconfigurationhistory (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.ListConfigurationHistory: Lists the INFO, WARN, and ERROR events for periodic configuration updates performed by Application Insights. Examples of events represented are: INFO: creating a new alarm or updating an alarm threshold. WARN: alarm not created due to insufficient data points used to predict thresholds. ERROR: alarm not created due to permission errors or exceeding quotas. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Listlogpatternsets (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.ListLogPatternSets: Lists the log pattern sets in the specific application. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Listlogpatterns (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.ListLogPatterns: Lists the log patterns in the specific log LogPatternSet. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Listproblems (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.ListProblems: Lists the problems with your application. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Listtagsforresource (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.ListTagsForResource: Retrieve a list of the tags (keys and values) that are associated with a specified application. A tag is a label that you optionally define and associate with an application. Each tag consists of a required tag key and an optional associated tag value. A tag key is a general label that acts as a category for more specific tag values. A tag value acts as a descriptor within a tag key. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Tagresource (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.TagResource: Add one or more tags (keys and values) to a specified application. A tag is a label that you optionally define and associate with an application. Tags can help you categorize and manage application in different ways, such as by purpose, owner, environment, or other criteria. Each tag consists of a required tag key and an associated tag value, both of which you define. A tag key is a general label that acts as a category for more specific tag values. A tag value acts as a descriptor within a tag key. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Untagresource (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.UntagResource: Remove one or more tags (keys and values) from a specified application. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Updateapplication (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateApplication: Updates the application. 
🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Updatecomponent (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateComponent: Updates the custom component name and/or the list of resources that make up the component. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Updatecomponentconfiguration (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateComponentConfiguration: Updates the monitoring configurations for the component. The configuration input parameter is an escaped JSON of the configuration and should match the schema of what is returned by DescribeComponentConfigurationRecommendation. 🔧 #X-Amz-Target=Ec2Windowsbarleyservice.Updatelogpattern (1 endpoints) • POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateLogPattern: Adds a log pattern to a LogPatternSet. 🤖 AI Integration Parameter Handling: AI agents automatically provide values for: • Path parameters and identifiers • Query parameters and filters • Request body data • Headers and authentication Response Format: Native Amazon CloudWatch Application Insights API responses with full data structure Error Handling: Built-in n8n HTTP request error management 💡 Usage Examples Connect this MCP server to any AI agent or workflow: • Claude Desktop: Add MCP server URL to configuration • Cursor: Add MCP server SSE URL to configuration • Custom AI Apps: Use MCP URL as tool endpoint • API Integration: Direct HTTP calls to MCP endpoints ✨ Benefits • Zero Setup: No parameter mapping or configuration needed • AI-Ready: Built-in $fromAI() expressions for all parameters • Production Ready: Native n8n HTTP request handling and logging • Extensible: Easily modify or add custom logic > 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Rahul Joshi
📘 Description: This workflow automates developer Q&A handling by connecting GitHub, GPT-4o (Azure OpenAI), Notion, Google Sheets, and Slack. Whenever a developer comments on a pull request with a “how do I…” or “how to…” question, the workflow automatically detects the query, uses GPT-4o to generate a concise technical response, stores it in Notion for documentation, and instantly shares it on Slack for visibility. It reduces repetitive manual answering, boosts engineering knowledge sharing, and keeps teams informed with AI-powered insights. ⚙️ What This Workflow Does (Step-by-Step) 🟢 GitHub PR Comment Trigger — Starts the automation when a pull request comment is posted in a specified repository. Action: Listens for pull_request_review_comment events. Description: Captures comment text, author, PR number, and repository name as the trigger payload. 🔍 Validate GitHub Webhook Payload (IF Node) — Ensures the webhook data includes a valid comment URL. ✅ True Path: Continues to question detection. ❌ False Path: Sends invalid or missing data to Google Sheets for error logging. ❓ Detect Developer Question in PR Comment — Checks whether the comment includes question patterns such as “how do I…” or “how to…”. If a valid question is found, the workflow proceeds to the AI assistant; otherwise, it ends silently. 🧠 Configure GPT-4o Model (Azure OpenAI) — Connects to the GPT-4o model for intelligent language generation. Acts as the central AI engine to craft short, precise technical answers. 🤖 Generate AI Response for Developer Question — Sends the developer’s comment and PR context to GPT-4o. GPT analyzes the question and produces a short (2–3 line) helpful answer, maintaining professional and technical tone. 🧩 Extract GitHub Comment Metadata — Uses a JavaScript code node to structure key details (repo, user, comment, file path, PR number) into a clean JSON format. Prepares standardized data for storage and further use. 🧾 Save Comment Insight to Notion Database — Appends the GitHub comment, AI response, and metadata into a Notion database (“test db”). Acts as a centralized knowledge base for tracking and reusing AI-generated technical answers. 💬 Post AI Answer & PR Link to Slack — Sends the generated response and GitHub PR comment link to a Slack channel or user. Helps reviewers or teammates instantly view AI-generated suggestions and maintain active discussion threads. 🚨 Log Errors in Google Sheets (Error Handling) — Logs webhook validation or AI-processing errors into a shared Google Sheet (“error log sheet”). Ensures full visibility into workflow issues for future debugging. 🧩 Prerequisites GitHub OAuth credentials with webhook access Azure OpenAI (GPT-4o) account Notion API integration for the documentation database Slack API connection for notifications Google Sheets API access for error tracking 💡 Key Benefits ✅ Automated detection of developer questions in GitHub comments ✅ AI-generated instant answers with context awareness ✅ Centralized documentation in Notion for knowledge reuse ✅ Real-time Slack notifications for visibility and collaboration ✅ Continuous error logging for transparent troubleshooting 👥 Perfect For Developer teams using GitHub for code reviews Engineering leads wanting AI-assisted PR support Companies aiming to build self-learning documentation Teams using Notion and Slack for workflow visibility
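The question-detection step described above can be as simple as a pattern match in a Code node. A minimal sketch follows; the patterns shown are illustrative and easy to extend.

```typescript
// Minimal sketch of the "Detect Developer Question" check. Patterns are illustrative;
// extend them to match your team's phrasing.
const QUESTION_PATTERNS = [/how do i\b/i, /how to\b/i, /\?\s*$/];

function isDeveloperQuestion(comment: string): boolean {
  return QUESTION_PATTERNS.some((pattern) => pattern.test(comment));
}

// Example: isDeveloperQuestion("How do I rebase this branch onto main?") -> true
```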
by Nghia Nguyen
This AI Agent helps you create short links from your original URLs. Each generated short link is automatically stored in a database table for easy management and tracking.

How It Works
1. Provide a long URL to the Agent.
2. The Agent saves your original link in the database.
3. It generates a short link in the following format:
   Short link: https://{webhook_url}/webhook/shortLink?q={shortLinkId}
4. When users open the short link, they are automatically redirected to your original link.

How to Use
1. Send your link to the Agent.
2. The Agent will respond with a generated short link.

Requirements
- Add your webhook URL as your_webhook_url in the Config node.
- An OpenAI account.
- A database table named ShortLink with the following columns:

| Column Name | Description |
| --- | --- |
| originalLink | Stores the full original URL. |
| shortLinkId | Stores the unique short link ID. |

Customization Options
- Add traffic tracking or analytics for each short link.
- Customize the redirect page to display your logo, message, or branding.
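Conceptually, the redirect webhook only needs an ID lookup and a redirect response. The sketch below (TypeScript) mirrors the ShortLink schema above; the in-memory map and helper names are hypothetical stand-ins for the real database calls.

```typescript
// Sketch only: generate a short ID, store the pair, and resolve ?q=<id> to a redirect.
// The in-memory Map stands in for the ShortLink table (shortLinkId / originalLink).
import { randomBytes } from "crypto";

const shortLinks = new Map<string, string>(); // shortLinkId -> originalLink

function createShortLink(originalLink: string, webhookUrl: string): string {
  const shortLinkId = randomBytes(4).toString("hex"); // e.g. "a1b2c3d4"
  shortLinks.set(shortLinkId, originalLink);
  return `${webhookUrl}/webhook/shortLink?q=${shortLinkId}`;
}

function resolveShortLink(shortLinkId: string): { status: number; location?: string } {
  const originalLink = shortLinks.get(shortLinkId);
  return originalLink ? { status: 302, location: originalLink } : { status: 404 };
}
```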
by DIGITAL BIZ TECH
AI Cost Estimation Chatbot (Conversational Dual-Agent + OCR Workflow)

Overview
This workflow introduces a conversational AI Cost Estimation Chatbot with built-in OCR document analysis and interactive form guidance. It helps users and teams handle pricing, measurement, and product configuration for multiple categories such as fabrics and tiles — whether data comes from an uploaded invoice, a stored RFQ, or live user input. The system blends Mistral AI's reasoning with n8n's native tools — OCR Extract, Calculator, Supabase, and Gmail — to deliver clear, step-by-step cost calculations. It automatically retrieves or parses OCR data, confirms details conversationally, performs unit conversions, and returns accurate estimates in real time. Escalation and recordkeeping are handled via Gmail and Supabase.

Chatbot Flow
- Trigger: Chat message (from the n8n Chat UI) or Webhook (from a live site).
- Model: Mistral Cloud Chat Model (mistral-medium-latest)
- Memory: Simple Memory (Buffer Window, 15-message history)
- Tools:
  - **OCR Extract:** Reads and converts invoices, receipts, and RFQs into structured data.
  - **Supabase:** Stores and retrieves OCR data for re-use in future calculations.
  - **Calculator:** Performs all material, area, and cost computations.
  - **Gmail:** Escalates customer queries or sends quote summaries.
- Agent: ai agent cost estimate

Workflow Behavior:
- Retrieves or parses OCR data, then confirms and completes missing details interactively.
- Guides users step-by-step through product setup (Fabric or Tile).
- Calculates costs transparently using MATERIAL_COSTS and PROCESSING_COSTS.
- Handles GSM ↔ sqm, area, and weight conversions automatically.
- Escalates support or order confirmations via Gmail when requested.

Integrations Used

| Service | Purpose |
| --- | --- |
| Chat | User-facing chatbot interface |
| OCR Extract | Processes uploaded documents or receipts |
| Supabase | Stores and retrieves OCR / quote data |
| Mistral AI | Chat model and reasoning engine |
| Calculator | Handles all numeric and cost calculations |
| Gmail | Sends escalations or quote summaries |

Agent System Prompt Summary
> "You are an AI cost estimation assistant for a brand.
> Retrieve or parse OCR data from Supabase, confirm details with the user, and calculate costs transparently.
> Use the Calculator for all numeric logic based on MATERIAL_COSTS and PROCESSING_COSTS.
> Handle GSM-to-sqm and other conversions automatically.
> If support or follow-up is needed, send a message through Gmail.
> Always guide the user conversationally, confirm assumptions, and explain every step clearly."

Key Features
✅ Chat interface input
✅ Conversational guidance even when OCR data doesn't exist
✅ OCR + Supabase integration for document reuse
✅ Interactive cost estimator for fabrics and tiles
✅ Transparent calculations and unit conversions
✅ Gmail integration for escalation or order confirmation
✅ Modular design for scaling to other product types

Summary
A powerful AI + OCR conversational cost estimation assistant that retrieves or parses order data, guides users through setup, and calculates costs transparently. It combines intelligence (Mistral), precision (Calculator), and automation (OCR + Supabase + Gmail) to create a complete, human-like quotation system — perfect for brands, manufacturers, and B2B platforms.

💡 We can help you set it up for free — from connecting credentials to deploying it live.
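To make the conversion and costing logic concrete, here is a small sketch of the kind of calculation the Calculator tool is asked to perform for fabric. The rate values and their units are placeholders standing in for the real MATERIAL_COSTS and PROCESSING_COSTS tables.

```typescript
// Placeholder rates standing in for MATERIAL_COSTS / PROCESSING_COSTS.
const MATERIAL_COSTS: Record<string, number> = { cotton: 4.5, linen: 7.0 };   // assumed cost per kg
const PROCESSING_COSTS: Record<string, number> = { dyeing: 1.2, printing: 2.0 }; // assumed cost per sqm

// GSM (grams per square metre) -> fabric weight in kg for a given area in sqm.
function fabricWeightKg(gsm: number, areaSqm: number): number {
  return (gsm * areaSqm) / 1000;
}

function estimateFabricCost(material: string, gsm: number, areaSqm: number, processing: string[]): number {
  const materialCost = fabricWeightKg(gsm, areaSqm) * (MATERIAL_COSTS[material] ?? 0);
  const processingCost = processing.reduce((sum, p) => sum + (PROCESSING_COSTS[p] ?? 0) * areaSqm, 0);
  return materialCost + processingCost;
}

// Example: estimateFabricCost("cotton", 200, 50, ["dyeing"])
//   -> weight 10 kg * 4.5 + 1.2 * 50 = 45 + 60 = 105
```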
Contact: shilpa.raju@digitalbiz.tech Website: https://www.digitalbiz.tech LinkedIn: https://www.linkedin.com/company/digitalbiztech/ You can also DM us on LinkedIn for any help.