by Ilyass Kanissi
🤖 Simple RAG Customer Support Chatbot

📋 Overview
This intelligent customer support chatbot leverages Retrieval-Augmented Generation (RAG) to provide accurate, contextual responses by combining your knowledge base with AI capabilities. The system automatically retrieves relevant documents from your Pinecone vector store and uses them to generate informed responses through OpenAI's language models.

⚡ Quick Setup
1. Import Workflow: Import this workflow template into your n8n instance.
2. Configure Credentials: Add the following API credentials:
   - OpenAI API Key: For chat completions and embeddings
   - Pinecone API Key: For vector database operations
   - Google Drive: For document auto-ingestion
3. Initialize Vector Store: Use the "Insert documents into Pinecone" workflow to populate your knowledge base.
4. Activate Workflow: Enable the main chat workflow to start receiving requests.

🔧 How it Works

Main Chat Flow (Agent Workflow)
User Message → Memory Retrieval → Vector Search → Context Assembly → AI Response → Memory Update → Response

Process Flow:
1. Message Reception: Webhook receives user chat messages with session management.
2. Memory Retrieval: Loads conversation history for context continuity.
3. Semantic Search: Queries the Pinecone vector store for relevant documents.
4. Context Assembly: Combines retrieved documents with conversation history.
5. AI Generation: OpenAI generates a contextual response using the assembled context.
6. Memory Storage: Updates conversation memory for future interactions.
7. Response Delivery: Returns the formatted response to the user interface.

Document Ingestion Flow
Document Source → Text Extraction → Chunking → Embedding → Vector Storage

Process Flow:
1. Document Trigger: Google Drive or manual file upload detection.
2. Content Extraction: Extracts text from various file formats (PDF, DOC, TXT).
3. Text Chunking: Splits documents into optimal chunks for embedding (see the sketch below).
4. Embedding Generation: Creates vector embeddings using OpenAI.
5. Vector Storage: Stores embeddings in Pinecone with metadata.
6. Index Update: Updates the search index for immediate availability.
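The chunking step can be approximated in an n8n Code node. A minimal sketch, assuming a fixed chunk size with overlap; the chunk size, overlap, and field names are assumptions, not the template's exact splitter settings:

```javascript
// n8n Code node (Run Once for All Items): split extracted text into
// overlapping chunks for embedding. Values below are assumed defaults.
const chunkSize = 1000;
const overlap = 200;

const { text = '', fileName = 'unknown' } = $input.first().json;

const chunks = [];
for (let start = 0; start < text.length; start += chunkSize - overlap) {
  chunks.push(text.slice(start, start + chunkSize));
}

// One item per chunk, carrying source metadata for the Pinecone upsert.
return chunks.map((chunk, i) => ({
  json: { chunk, chunkIndex: i, source: fileName },
}));
```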
by Zakwan
📖 Overview
This template automates the process of researching a keyword, scraping top-ranking articles, cleaning their content, and generating a high-quality, SEO-optimized blog post. It uses Google Search via RapidAPI, Ollama with Mistral AI, and Google Drive to deliver an end-to-end automated content workflow.

Ideal for content creators, SEO specialists, bloggers, and marketers who need to quickly gather and summarize insights from multiple sources to create superior content.

⚙️ Prerequisites
Before using this workflow, make sure you have:
- n8n installed (Desktop, Docker, or Cloud).
- Ollama installed with the mistral:7b model: ollama pull mistral:7b
- A RapidAPI account (for the Google Search API).
- A Google Drive account (with a target folder where articles will be saved).

🔑 Credentials Required
1. RapidAPI (Google Search API)
   - Header authentication with your API key. Example headers:
     x-rapidapi-key: YOUR_API_KEY
     x-rapidapi-host: google-search74.p.rapidapi.com
2. Google Drive OAuth2
   - Allow read/write permissions.
   - Update the folderId with the Drive folder where articles should be stored.
3. Ollama API
   - Base URL: http://localhost:11434 (local n8n) or http://host.docker.internal:11434 (inside Docker).
   - Ensure the mistral:7b model is available.

🚀 Setup Instructions
1. Configure RapidAPI: Sign up at RapidAPI, subscribe to the Google Search API, and create an HTTP Header Auth credential in n8n with your API key.
2. Configure Google Drive: In n8n, add a Google Drive OAuth2 credential and select the Drive folder ID where output files should be saved.
3. Configure Ollama: Install Ollama locally, pull the required model (mistral:7b), and create an Ollama API credential in n8n.
4. Run the Workflow: Trigger it by sending a chat message with your target keyword. The workflow searches Google, extracts the top 3 results, scrapes the articles, cleans the content, and generates a structured blog post. The final output is stored in Google Drive as a .docx file.

🎨 Customization Options
- Search Engine: Swap out RapidAPI for Bing or SerpAPI.
- Number of Articles: Change limit: 3 in the Google Search node.
- Content Cleaning: Modify the regex in the "Clean Body Text" node to capture the HTML tags you need (see the sketch below).
- AI Model: Replace mistral:7b with llama3, mixtral, or any other Ollama-supported model.
- Storage: Save output to a different Google Drive folder or export to Notion/Slack.

📌 Workflow Highlights
- Google Search (RapidAPI): Fetch the top 3 results for your keyword.
- HTTP Request + Code Nodes: Extract and clean article body text.
- Mistral AI via Ollama: Summarize, optimize, and refine the content.
- Google Drive: Save the final blog-ready article automatically.
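A minimal sketch of the kind of cleanup the "Clean Body Text" node performs; the exact regex in the template may differ, and the input field name here is an assumption:

```javascript
// n8n Code node (Run Once for All Items): strip markup and noise from
// scraped article HTML. The `body` field name is an assumption.
const html = $input.first().json.body ?? '';

const cleaned = html
  .replace(/<script[\s\S]*?<\/script>/gi, '') // drop inline scripts
  .replace(/<style[\s\S]*?<\/style>/gi, '')   // drop inline styles
  .replace(/<[^>]+>/g, ' ')                   // strip remaining HTML tags
  .replace(/&nbsp;|&amp;|&quot;|&#\d+;/g, ' ')// replace common entities
  .replace(/\s+/g, ' ')                       // collapse whitespace
  .trim();

return [{ json: { cleanedText: cleaned } }];
```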
by Cheng Siong Chin
How It Works
The workflow starts with a scheduled trigger that activates at set intervals. Behavioral data from multiple sources is parsed and sent to the MCDN routing engine, which intelligently assigns leads to the right teams based on predefined rules. AI-powered scoring evaluates each prospect's potential, ensuring high-quality leads are prioritized. The results are synced to the CRM, and updates are reflected on an analytics dashboard for real-time visibility. A sketch of the routing logic follows the setup steps below.

Setup Steps
1. Trigger: Define the schedule frequency.
2. Data Fetch: Configure APIs for all behavioral data sources.
3. MCDN Router: Set routing rules, thresholds, and team assignments.
4. AI Models: Connect OpenAI/NVIDIA APIs and configure scoring prompts.
5. CRM Integration: Enter credentials for Salesforce, HubSpot, or another CRM.
6. Dashboard: Link to analytics tools like Tableau or Google Sheets for reporting.

Prerequisites
API credentials for NVIDIA AI, OpenAI, and your CRM platform; access to your data sources; spreadsheet/analytics access.

Use Cases
Lead prioritization for sales teams; customer segmentation; automated routing.

Customization
Adjust routing rules, add custom scoring models, modify team assignments, expand data sources, integrate additional AI providers.

Benefits
Reduces manual lead routing by 90%; improves scoring accuracy; accelerates the sales cycle; enables data-driven team assignments.
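The routing rules can be pictured as a small threshold function. A hedged sketch only: the score thresholds, team names, and field names below are assumptions, not the template's actual configuration:

```javascript
// Sketch of threshold-based lead routing. All thresholds, team names,
// and fields are illustrative assumptions.
function routeLead(lead) {
  const { score = 0, region = 'unknown' } = lead; // score from the AI model
  if (score >= 80) return { team: 'enterprise-sales', priority: 'high' };
  if (score >= 50) {
    return {
      team: region === 'EMEA' ? 'emea-sales' : 'inbound-sales',
      priority: 'medium',
    };
  }
  return { team: 'nurture', priority: 'low' };
}

// Example: routeLead({ score: 85, region: 'EMEA' })
// => { team: 'enterprise-sales', priority: 'high' }
```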
by Rahi
n8n Workflow: AI-Personalized Email Outreach (Smartlead)

🔄 Purpose
This workflow automates cold email campaigns by:
- Fetching leads
- Generating hyper-personalized email content using AI
- Sending emails via the Smartlead API
- Logging campaign activity into Google Sheets

🧩 Workflow Structure
1. Schedule Trigger: Starts the workflow automatically at scheduled intervals, ensuring continuous campaign execution.
2. Get Leads: Fetches lead data (name, email, company, role, industry) as the input for personalization.
3. Loop Over Leads: Processes each lead one by one to maintain individualized email generation.
4. Aggregate Lead Data: Collects and formats lead attributes, preparing structured input for the AI model.
5. Basic LLM Chain #1: Generates personalized snippets/openers using AI, tailored to company, role, and industry.
6. Update Row (Google Sheets): Saves AI outputs (snippets) for tracking and QA.
7. Basic LLM Chain #2: Expands each snippet into a full personalized email draft, including the subject line and email body.
8. Information Extractor: Extracts structured fields from the AI output: subject, greeting, call-to-action (CTA), and closing.
9. Update Row (Google Sheets): Stores the finalized draft in Google Sheets, providing visibility and an audit trail.
10. Code: Formats the email into a Smartlead-compatible payload, mapping fields like subject, body, and recipient details (see the sketch below).
11. Smartlead API Request: Sends the personalized email through Smartlead and returns the message ID and delivery status.
12. Basic LLM Chain #3 (Optional): Generates follow-up versions for multi-step campaigns, ensuring varied engagement over time.
13. Information Extractor (Follow-ups): Structures follow-up emails into a ready-to-send format.
14. Update Row (Google Sheets): Updates campaign logs with the Smartlead send status, message IDs, and AI personalization notes.

⚙️ Data Flow Summary
- **Trigger** → Runs workflow
- **Get Leads** → Fetch lead records
- **LLM Personalization** → Create openers + full emails
- **Google Sheets** → Save drafts & logs
- **Smartlead API** → Send personalized email
- **Follow-ups** → Generate and log structured follow-up messages

📊 Use Case
- Automates hyper-personalized cold email outreach at scale.
- Uses AI to improve response rates with contextual personalization.
- Provides full visibility by saving drafts and send logs in Google Sheets.
- Integrates seamlessly with Smartlead for sending and tracking.
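The Code step that shapes the Smartlead payload might look like the sketch below. The field names are illustrative assumptions; consult Smartlead's API documentation for the exact schema:

```javascript
// n8n Code node (Run Once for All Items): shape the extracted AI output
// into a Smartlead-style payload. All field names are assumptions.
const { lead_email, subject, body } = $input.first().json;

const payload = {
  to: lead_email,
  subject,
  email_body: body,
  // The campaign ID would come from your Smartlead campaign settings.
  campaign_id: '<YOUR_CAMPAIGN_ID>',
};

return [{ json: payload }];
```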
by Anirudh Aeran
This template creates a comprehensive, production-ready Retrieval-Augmented Generation (RAG) system. It builds a sophisticated AI agent that can answer questions based on documents stored in a specific Google Drive folder, and it automatically keeps its knowledge base up to date as you add, update, or remove files.

Who's it for?
This workflow is perfect for developers, businesses, and AI agencies looking to:
- Create an internal knowledge base chatbot for employees (e.g., for HR policies, technical documentation, or project information).
- Build an intelligent support agent that uses your company's official documents as its source of truth.
- Develop advanced AI solutions for clients that require a self-maintaining knowledge base.

How it works
This workflow is divided into three distinct, powerful systems:
1. The RAG Agent: This is the core chatbot. It receives a user's question, uses a Supabase Vector Store to find the most relevant document snippets, leverages a Cohere Reranker to improve accuracy, and uses a Postgres database to maintain conversation history (memory). It then uses Google Gemini to generate a final, context-aware answer.
2. The Ingestion Pipeline: This system automates the process of learning new information. It triggers whenever a file is created or updated in your designated Google Drive folder. It intelligently detects the file type (Google Doc or PDF), extracts the text, splits it into manageable chunks, generates embeddings using Gemini, and stores them in your Supabase vector database.
3. The Cleanup System: To ensure your knowledge base remains accurate, a scheduled process runs periodically to find and remove data from Supabase that corresponds to files deleted from the Google Drive folder. This prevents the agent from using outdated information. A sketch of this comparison appears at the end of this description.

How to set up
To get this workflow running, you will need to configure the following:
1. Credentials: Connect your accounts in the n8n credential manager for:
   - Google Drive (OAuth2)
   - Supabase (API Key)
   - Postgres
   - Google Gemini (API Key from Google AI Studio)
   - Cohere (API Key)
2. Google Drive Folder: In the Search files and folders node, replace the placeholder folder ID with the ID of the Google Drive folder you want to monitor.
3. Database Setup: Ensure your Supabase and Postgres instances are set up with the necessary tables. You'll need a documents table in Supabase for the vectors and a document_metadata table in Postgres.

How to customize the workflow
This template is a powerful starting point. You can easily customize it by:
- Swapping out the LLM (e.g., use OpenAI or Anthropic instead of Gemini).
- Changing the vector database (e.g., Pinecone, Weaviate).
- Adding more data sources, such as Notion, Slack, or websites.
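The Cleanup System's core comparison can be sketched in a Code node. A minimal sketch, assuming the Drive file list comes from the Search files and folders node and that the metadata rows carry a file_id field (both assumptions):

```javascript
// n8n Code node sketch: find database rows whose source file no longer
// exists in Google Drive. Input shapes and field names are assumptions.
const driveIds = new Set(
  $('Search files and folders').all().map((item) => item.json.id)
);
const dbRows = $input.all();

// Orphans: metadata rows whose file_id is missing from the Drive listing.
const orphans = dbRows.filter((row) => !driveIds.has(row.json.file_id));

// Emit one item per orphan so a downstream node can delete its vectors.
return orphans.map((row) => ({ json: { file_id: row.json.file_id } }));
```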
by Rahul Joshi
📘 Description
This workflow automates dependency update risk analysis and reporting using Jira, GPT-4o, Slack, and Google Sheets. It continuously monitors Jira for new package or dependency update tickets, uses AI to assess their risk levels (Low, Medium, High), posts structured comments back into Jira, and alerts the DevOps team in Slack, all while logging historical data into Google Sheets for visibility and trend analysis.

This ensures fast, data-driven decisions for dependency upgrades, improved code stability, and reduced security risks, with zero manual triage.

⚙️ What This Workflow Does (Step-by-Step)

🟢 When Clicking "Execute Workflow"
Manually triggers the dependency risk analysis sequence for immediate review or scheduled monitoring.

📋 Fetch All Active Jira Issues
Retrieves all active Jira issues to identify tickets related to dependency or package updates. Provides the complete dataset, including summary, status, and assignee information, for AI-based risk evaluation.

✅ Validate Jira Query Response
Verifies that Jira returned valid issue data before proceeding.
- If data exists → continues filtering dependency updates.
- If no data or an API error → logs the failure to Google Sheets.
This prevents the workflow from continuing with empty or broken datasets.

🔍 Identify Dependency Update Issues
Filters Jira issues to find only dependency-related tickets (keywords like "update," "bump," "package," or "library"). This ensures only relevant version update tasks are analyzed, filtering out unrelated feature or bug tickets.

🏷️ Extract Relevant Issue Metadata
Extracts essential fields such as key, summary, priority, assignee, status, and created date for downstream AI processing. Simplifies the data payload and ensures accurate, structured analysis.

📢 Alert DevOps Team in Slack
Immediately notifies the assigned DevOps engineer via Slack DM about any new dependency update issue. Includes formatted details like summary, key, status, priority, and a direct Jira link for quick access. Ensures rapid visibility and faster response to potential risk tickets.

🤖 AI-Powered Risk Assessment Analyzer
Uses GPT-4o (Azure OpenAI) to intelligently evaluate each dependency update's risk level and impact summary. Considers factors such as:
- Dependency criticality
- Version change type (major/minor/patch)
- Security or EOL indicators
- Potential breaking changes
Outputs clean JSON with the fields:

```json
{"risk_level": "Low | Medium | High", "impact_summary": "Short human-readable explanation"}
```

Helps DevOps teams prioritize updates with context.

🧠 GPT-4o Language Model Configuration
Configures the AI reasoning engine for precise, context-aware DevOps assessments. Optimized for a consistent technical tone and cost-efficient batch evaluation.

📊 Parse AI Response to Structured Data
Safely parses the AI's JSON output, removing markdown artifacts and ensuring structure. Adds the parsed fields, risk_level and impact_summary, back to the Jira context. Includes fail-safes to prevent crashes on malformed AI output (falls back to "Unknown" and "Failed to parse"); a sketch of this step appears at the end of this description.

💬 Post AI Risk Assessment to Jira Ticket
Automatically posts the AI's analysis as a comment on the Jira issue:
- Displays a 🤖 AI Risk Assessment Report header
- Shows the Risk Level and Impact Summary
- Includes a checklist of next steps for developers
Creates a permanent audit trail for each dependency decision inside Jira.
📈 Log Dependency Updates to Tracking Dashboard
Appends all analyzed updates into Google Sheets, recording:
- Date
- Jira Key & Summary
- Risk Level & Impact Summary
- Assignee & Status
This builds a historical dependency risk database that supports:
- Trend monitoring
- Security compliance reviews
- Dependency upgrade metrics
- DevOps productivity tracking

📊 Log Jira Query Failures to Error Sheet
If the Jira query fails, the workflow automatically logs the error (API/auth/network) into a centralized error sheet for troubleshooting and visibility.

🧩 Prerequisites
- Jira Software Cloud API credentials
- Azure OpenAI (GPT-4o) access
- Slack API connection
- Google Sheets OAuth2 credentials

💡 Key Benefits
✅ Automated dependency risk assessment
✅ Instant Slack alerts for update visibility
✅ Historical tracking in Google Sheets
✅ Reduced manual triage and faster decision-making
✅ Continuous improvement in release reliability and security

👥 Perfect For
- DevOps and SRE teams managing large dependency graphs
- Engineering managers monitoring package updates and risks
- Security/compliance teams tracking vulnerability fix adoption
- Product teams aiming for stable CI/CD pipelines
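The fail-safe parsing step described above might look like this sketch. The fallback strings match the behavior the workflow describes; the input field name is an assumption:

```javascript
// n8n Code node sketch: parse the model's JSON output with fallbacks.
const raw = $input.first().json.ai_output ?? ''; // field name is an assumption

// Strip markdown code fences the model sometimes wraps around JSON.
const stripped = raw.replace(/```(?:json)?/g, '').trim();

let risk_level = 'Unknown';
let impact_summary = 'Failed to parse';
try {
  const parsed = JSON.parse(stripped);
  risk_level = parsed.risk_level ?? risk_level;
  impact_summary = parsed.impact_summary ?? impact_summary;
} catch (e) {
  // Keep the fallback values on malformed output, as the workflow describes.
}

return [{ json: { risk_level, impact_summary } }];
```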
by David Ashby
🛠️ NASA Tool MCP Server

Complete MCP server exposing all NASA Tool operations to AI agents. Zero configuration needed: all 15 operations are pre-built.

⚡ Quick Setup
Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every NASA Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (example at the end of this section)
• Native Integration: Uses the official n8n NASA Tool node with full error handling

📋 Available Operations (15 total)
Every possible NASA Tool operation is included:
🔧 Asteroidneobrowse (1 operation) • Get many asteroid NEOs
🔧 Asteroidneofeed (1 operation) • Get an asteroid NEO feed
🔧 Asteroidneolookup (1 operation) • Get an asteroid NEO lookup
🔧 Astronomypictureoftheday (1 operation) • Get the astronomy picture of the day
🔧 Donkicoronalmassejection (1 operation) • Get a DONKI coronal mass ejection
🔧 Donkihighspeedstream (1 operation) • Get a DONKI high speed stream
🔧 Donkiinterplanetaryshock (1 operation) • Get a DONKI interplanetary shock
🔧 Donkimagnetopausecrossing (1 operation) • Get a DONKI magnetopause crossing
🔧 Donkinotifications (1 operation) • Get DONKI notifications
🔧 Donkiradiationbeltenhancement (1 operation) • Get a DONKI radiation belt enhancement
🔧 Donkisolarenergeticparticle (1 operation) • Get a DONKI solar energetic particle
🔧 Donkisolarflare (1 operation) • Get a DONKI solar flare
🔧 Donkiwsaenlilsimulation (1 operation) • Get a DONKI WSA-Enlil simulation
🔧 Earthassets (1 operation) • Get Earth assets
🔧 Earthimagery (1 operation) • Get Earth imagery

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native NASA Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every NASA Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
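As an illustration of the $fromAI() mechanism mentioned above, a tool parameter can be filled with an n8n expression like the one below. The parameter name and description are illustrative, not taken from the template:

```javascript
// n8n expression sketch: let the connected AI agent supply this value.
// The key, description, and type arguments here are illustrative.
{{ $fromAI('asteroid_id', 'The NEO reference ID to look up', 'string') }}
```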
by Nskha
This n8n template provides a comprehensive solution for managing Key-Value (KV) pairs using Cloudflare's KV storage. It's designed to simplify interaction with Cloudflare's KV storage APIs, enabling users to perform a range of actions like creating, reading, updating, and deleting namespaces and KV pairs.

Features
- **Efficient Management**: Handle multiple KV operations seamlessly.
- **User-Friendly**: Easy to use with pre-configured Cloudflare API credentials within n8n.
- **Customizable**: Flexible for integration into larger workflows (copy/paste your preferred part).

Prerequisites
- n8n workflow automation tool (version 1.19.0 or later).
- A Cloudflare account with access to KV storage.
- Pre-configured Cloudflare API credentials in n8n.

Workflow Overview
This workflow is divided into three main sections for ease of use:
1. Single Actions: Perform individual operations on KV pairs.
2. Bulk Actions: Handle multiple KV pairs simultaneously.
3. Specific Actions: Execute specific tasks like renaming namespaces.

Key Components
- **Manual Trigger**: Initiates the workflow.
- **Account Path Node**: Sets the path for account details, a prerequisite for all actions.
- **HTTP Request Nodes**: Facilitate interaction with Cloudflare's API for various operations.
- **Sticky Notes**: Provide quick documentation links and brief descriptions of each node's function.

Usage
1. Setup Account Path: Input your Cloudflare account details in the 'Account Path' node. You can find your account path in your Cloudflare dashboard URL.
2. Choose an Action: Select the desired operation from the workflow.
3. Configure Nodes: Adjust parameters in the HTTP Request nodes as needed (each node contains a sticky note with a direct link to its documentation page).
4. Execute Workflow: Trigger the workflow manually to perform the selected operations.

Detailed Node Descriptions
This workflow covers the full set of API calls for Cloudflare's KV product.

API NODE: Delete KV
- **Type**: HTTP Request
- **Function**: Deletes a specified KV pair within a namespace.
- **Configuration**: This node requires the namespace ID and KV pair name. It automatically fetches these details from preceding nodes, specifically the "List KV-NMs" and "Set KV-NM Name" nodes.
- **Documentation**: Delete KV Pair API

API NODE: Create KV-NM
- **Type**: HTTP Request
- **Function**: Creates a new Key-Value Namespace.
- **Configuration**: Users need to input the title for the new namespace. This node uses the account information provided by the "Account Path" node.
- **Documentation**: Create Namespace API

API NODE: Delete KV1
- **Type**: HTTP Request
- **Function**: Renames an existing Key-Value Namespace.
- **Configuration**: Requires the old namespace name and the new desired name. It retrieves these details from the "KV to Rename" and "List KV-NMs" nodes.
- **Documentation**: Rename Namespace API

API NODE: Write KVs inside NM
- **Type**: HTTP Request
- **Function**: Writes multiple Key-Value pairs inside a specified namespace (a sample request body appears at the end of this section).
- **Configuration**: This node needs a JSON array of key-value pairs along with their namespace identifier. It fetches the namespace ID from the "List KV-NMs" node.
- **Documentation**: Write Multiple KV Pairs API

API NODE: Read Value Of KV In NM
- **Type**: HTTP Request
- **Function**: Reads the value of a specific Key-Value pair in a namespace.
- **Configuration**: Requires the key's name and namespace ID, which are obtained from the "Set KV-NM Name" and "List KV-NMs" nodes.
- **Documentation**: Read KV Pair API

API NODE: Read MD from Key
- **Type**: HTTP Request
- **Function**: Reads the metadata of a specific key in a namespace.
- **Configuration**: Similar to the "Read Value Of KV In NM" node, it needs the key's name and namespace ID, obtained from the "Set KV-NM Name" and "List KV-NMs" nodes.
- **Documentation**: Read Metadata API

> The rest can be found inside the workflow, where sticky notes explain what each node does.

Best Practices
- **Modular Use**: Extract specific parts of the workflow for isolated tasks.
- **Validation**: Ensure correct namespace and KV pair names before execution.
- **Security**: Regularly update your Cloudflare API credentials for secure access, and make sure your API token only has access to KV.

Keywords: Cloudflare KV, n8n workflow automation, API integration, key-value storage management.
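For the "Write KVs inside NM" node referenced above, the request body is a JSON array of key-value pairs. A minimal example sketch; the keys and values are placeholders, and the optional expiration_ttl field should be checked against Cloudflare's current bulk-write documentation:

```json
[
  { "key": "config:theme", "value": "dark" },
  { "key": "session:abc123", "value": "{\"user\":42}", "expiration_ttl": 3600 }
]
```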
by Rahi
Workflow: Track Email Campaign Engagement Analytics with Smartlead and Google Sheets

Automatically fetch lead-level email engagement analytics (opens, clicks, replies, unsubscribes, bounces) from Smartlead and update them in Google Sheets. Use this to keep a single, always-fresh source of truth for campaign performance and sequence effectiveness.

Summary
Pull Smartlead campaign analytics on a schedule and write them to a Google Sheet (append or update). Works with pagination, avoids duplicates via a stable key, and is ready for dashboards, pivots, or BI tools.

What This Workflow Does
- Collects campaign stats from Smartlead (per-lead, per-sequence).
- Handles pagination safely (offset/limit).
- Writes to Google Sheets using appendOrUpdate with a matching column to prevent duplicates.
- Can run on a schedule for near real-time analytics.

Node Structure Overview

| Step | Node | Purpose |
|---|---|---|
| 1️⃣ | Schedule Trigger | Starts the workflow on a cadence (e.g., hourly) |
| 2️⃣ | Code (Pagination Generator) | Emits {offset, limit} pairs (e.g., 0..9900, step 100) |
| 3️⃣ | Split in Batches | Sends each pagination pair to the API sequentially |
| 4️⃣ | HTTP Request (Smartlead) | GET /campaigns/{campaign_id}/statistics with offset/limit |
| 5️⃣ | Split Out | Turns the API data[] array into one item per lead record |
| 6️⃣ | Google Sheets (appendOrUpdate) | Upserts rows by stats_id into the EngagedLeads tab |
| 7️⃣ | Loop Back | Continues until all batches have been processed |

Step-by-Step Setup

1. Prerequisites
   - Smartlead account + API key with access to campaign statistics.
   - Google account + Google Sheets OAuth connected in n8n.
2. Create the Google Sheet
   - Spreadsheet name: Email Analytics (can be anything).
   - Tab name: EngagedLeads.
   - Add these exact headers (first row): lead_name, lead_email, lead_category, sequence_number, stats_id, email_subject, sent_time, open_time, click_time, reply_time, open_count, click_count, is_unsubscribed, is_bounced
3. Configure the Schedule Trigger
   - Choose a frequency (e.g., every 2 hours). If you're testing, set a single run or a short cadence.
4. Configure the Code Node (Pagination)
   - Emit N items like:
     { "offset": 0, "limit": 100 }
     { "offset": 100, "limit": 100 }
     ...
   - 100 is a good default limit. For up to 10,000 records, generate 100 offsets. (A sketch of this node appears after the setup steps.)
5. Configure the Smartlead API Node
   - Method: GET
   - URL: https://server.smartlead.ai/api/v1/campaigns/{campaign_id}/statistics
   - Query parameters:
     - api_key = <YOUR_SMARTLEAD_API_KEY>
     - offset = {{ $json.offset }}
     - limit = {{ $json.limit }}
   - Map the response to JSON.
6. Split Out the Response
   - Use a Split Out (or similar) node to iterate over data[] so each lead record becomes one item.
7. Google Sheets Node (Append or Update)
   - Operation: appendOrUpdate.
   - Document: Your Email Analytics sheet.
   - Sheet/Tab: EngagedLeads.
   - Matching Column: stats_id.
   - Map fields from the Smartlead response to sheet columns:
     - lead_name ← lead name (or composed from first/last if provided)
     - lead_email ← email
     - lead_category ← category/type if available
     - sequence_number ← sequence step number
     - stats_id ← stable identifier (e.g., Smartlead stats_id or message ID)
     - email_subject ← subject
     - sent_time, open_time, click_time, reply_time ← timestamps
     - open_count, click_count ← integers
     - is_unsubscribed, is_bounced ← booleans
   - If the same stats_id arrives again, the row is updated, not appended.
8. Test and Activate
   - Run once manually to verify the API and sheet mapping.
   - Check the sheet for new/updated rows.
   - Activate the workflow to run automatically.
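A minimal sketch of the pagination generator Code node described in step 4; the total record cap is an assumption you should tune to your campaign size:

```javascript
// n8n Code node (Run Once for All Items): emit {offset, limit} pairs
// for the Smartlead statistics endpoint. TOTAL is an assumed upper bound.
const LIMIT = 100;
const TOTAL = 10000; // covers offsets 0..9900, i.e., 100 batches

const items = [];
for (let offset = 0; offset < TOTAL; offset += LIMIT) {
  items.push({ json: { offset, limit: LIMIT } });
}
return items;
```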
Smartlead API Reference (Used by This Workflow)
- **Endpoint**: GET https://server.smartlead.ai/api/v1/campaigns/{campaign_id}/statistics
- **Required query parameters**: api_key (string), offset (number), limit (number)
- **Typical response (trimmed example)**:

```json
{
  "data": [
    {
      "lead_name": "Jane Doe",
      "lead_email": "jane@example.com",
      "sequence_number": 2,
      "stats_id": "15b6ff3a-...-b2b9f343c2e1",
      "email_subject": "Quick intro",
      "sent_time": "2025-10-08T10:18:55.496Z",
      "open_time": "2025-10-08T10:20:10.000Z",
      "click_time": null,
      "reply_time": null,
      "open_count": 1,
      "click_count": 0,
      "is_unsubscribed": false,
      "is_bounced": false
    }
  ],
  "total": 1234
}
```

Google Sheets Structure (Recommended)
- Spreadsheet: Email Analytics
- Tab: EngagedLeads
- Columns: lead_name, lead_email, lead_category, sequence_number, stats_id, email_subject, sent_time, open_time, click_time, reply_time, open_count, click_count, is_unsubscribed, is_bounced
- Matching Column: stats_id (prevents duplicates and allows updates)

Customization Tips
- **Multiple Campaigns**: Duplicate the workflow and set a different {campaign_id}, and/or write results to a separate tab in your Google Sheet.
- **Batch Size**: Increase or decrease the limit value (e.g., 200) in your Code node if you want fewer or more API calls.
- **Filtering**: Add a Code or IF node to skip rows where is_bounced = true or is_unsubscribed = true.
- **Dashboards**: Create a new tab named Dashboard in Google Sheets and visualize your data using built-in charts, or connect it to Looker Studio for advanced visualization.
- **Enrichment**: Join this dataset with your CRM data (e.g., HubSpot or Salesforce) using lead_email as a key to gain deeper customer insights.

Security and Publishing Notes
- **Do not hardcode** your Smartlead API key in the workflow export. Use n8n credentials or environment variables instead.
- When sharing the template publicly, replace sensitive values with placeholders like <YOUR_SMARTLEAD_API_KEY> and <YOUR_GOOGLE_SHEET_ID>.
- Keep your Google Sheet private unless you intentionally want to share it publicly.

Troubleshooting
- **No rows in Sheets**: Verify that the API response includes data[], confirm that the Split Out node is configured correctly, and check field mappings.
- **Duplicates**: Ensure the Google Sheets node has its matching column set to stats_id.
- **Rate limits**: Increase the schedule interval, add a short Wait node between batches, or reduce the limit size.
- **Mapping errors**: Ensure that column names in Sheets exactly match your field mappings; they are case-sensitive.
- **Timezone differences**: Smartlead timestamps are in UTC. Convert them downstream if your local timezone is different.

Example Use Case
Run this workflow hourly to maintain a live, company-wide Email Engagement Sheet.
- **Sales teams** can monitor replies and active leads.
- **Marketing teams** can track open and click rates by sequence.
- **Operations** can export monthly summaries, no Smartlead login required.

Tags: Smartlead, EmailMarketing, Automation, GoogleSheets, Analytics, CRM, MarketingOps
by David Ashby
Complete MCP server exposing 14 Domains-Index API operations to AI agents.

⚡ Quick Setup
Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Domains-Index API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Domains-Index API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to /v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (14 total)

🔧 Domains (9 endpoints)
• GET /domains/search: Domains Database Search
• GET /domains/tld/{zone_id}: Get TLD records
• GET /domains/tld/{zone_id}/download: Download Whole Dataset for TLD
• GET /domains/tld/{zone_id}/search: Domains Search for TLD
• GET /domains/updates/added: Get added domains, latest if date not specified
• GET /domains/updates/added/download: Download added domains, latest if date not specified
• GET /domains/updates/deleted: Get deleted domains, latest if date not specified
• GET /domains/updates/deleted/download: Download deleted domains, latest if date not specified
• GET /domains/updates/list: List of updates

🔧 Info (5 endpoints)
• GET /info/api: Get API info
• GET /info/stat/: Returns overall statistics
• GET /info/stat/{zone}: Returns statistics for a specific zone
• GET /info/tld/: Returns overall TLD info
• GET /info/tld/{zone}: Returns statistics for a specific zone

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Domains-Index API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 9 Api2Pdf (PDF Generation, Powered by AWS Lambda) API operations to AI agents.

⚡ Quick Setup
Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Api2Pdf credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Api2Pdf API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://v2018.api2pdf.com
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (9 total)

🔧 Chrome (3 endpoints)
• POST /chrome/html: Convert raw HTML to PDF
• GET /chrome/url: Convert URL to PDF
• POST /chrome/url: Convert URL to PDF

🔧 LibreOffice (1 endpoint)
• POST /libreoffice/convert: Convert an office document or image to PDF

🔧 Merge (1 endpoint)
• POST /merge: Merge multiple PDFs together

🔧 Wkhtmltopdf (3 endpoints)
• POST /wkhtmltopdf/html: Convert raw HTML to PDF
• GET /wkhtmltopdf/url: Convert URL to PDF
• POST /wkhtmltopdf/url: Convert URL to PDF

🔧 Zebra (1 endpoint)
• GET /zebra: Generate barcodes and QR codes with ZXing

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Api2Pdf API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 15 BulkSMS JSON REST API operations to AI agents.

⚡ Quick Setup
Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add BulkSMS JSON REST API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the BulkSMS JSON REST API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.bulksms.com/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (15 total)

🔧 Blocked-Numbers (2 endpoints)
• GET /blocked-numbers: Block Phone Number
• POST /blocked-numbers: Create a blocked number

🔧 Credit (1 endpoint)
• POST /credit/transfer: Transfer Account Credits

🔧 Messages (5 endpoints)
• GET /messages: List Related Messages
• POST /messages: Send Messages
• GET /messages/send: Send message by simple GET or POST
• GET /messages/{id}: Show Message
• GET /messages/{id}/relatedReceivedMessages: List Related Messages

🔧 Profile (1 endpoint)
• GET /profile: Retrieve User Profile

🔧 Rmm (1 endpoint)
• POST /rmm/pre-sign-attachment: Generate Attachment Upload URL

🔧 Webhooks (5 endpoints)
• GET /webhooks: Update Webhook Settings
• POST /webhooks: Create a webhook
• DELETE /webhooks/{id}: Delete a webhook
• GET /webhooks/{id}: Read a webhook
• POST /webhooks/{id}: Update a webhook

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native BulkSMS JSON REST API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.