by SendPulse
How it works
This n8n template automates lead processing from your website. It receives customer data via a Webhook, stores the customer's contact (email or phone number) in the corresponding SendPulse address books, and uses the SendPulse MCP Server to send personalized welcome messages (email or SMS) generated using AI. The template also includes built-in SendPulse token management logic with caching in a Data Table, which reduces the number of unnecessary API requests.

SendPulse's MCP server is a tool that helps you manage your account through a chat with an AI assistant. It uses SendPulse API methods to get information and perform actions, such as requesting statistics, running message campaigns, or updating user data. The MCP server acts as middleware between your AI assistant and your SendPulse account: it processes requests through the SendPulse API and sends results back to the chat, so you can manage everything without leaving the conversation.

Once connected, the MCP server operates as follows:
1. You ask your AI assistant something in chat.
2. The assistant forwards your request to the MCP server.
3. The MCP server calls the API to get data or perform an action.
4. The AI assistant sends the result back to your chat.

Set up
Requirements:
- An active SendPulse account.
- Client ID and Client Secret from your SendPulse account.
- An API key from your OpenAI account to power the AI agent.

Set up steps:
1. Get your OpenAI API key: https://platform.openai.com/api-keys
2. Add your OpenAI API key to the OpenAI Chat Model node in the n8n workflow.
3. Get your Client ID and Client Secret from your SendPulse account: https://login.sendpulse.com/settings/#api
4. Add your Client ID and Client Secret to the Workflow Configuration node.
5. Add your Client ID and Client Secret to the SendPulse MCP Client node as the headers X-SP-ID and X-SP-SECRET in Multiple Headers Auth.
6. In the Workflow Configuration node, change the names of the mailing lists and the senderName, senderEmail, smsSender, routeCountryCode, and routeType fields as needed.
Create a tokens table with the columns: hash (string), accessToken (string), tokenExpiry (string) in the Data tables section of your n8n platform account.
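The token-caching logic backed by that tokens table can be sketched as a small check: reuse the cached token while it is still valid, otherwise request a fresh one. This is a minimal illustration, not the template's actual Code node; the field names match the Data Table columns above, but the refresh margin and the ISO-string expiry format are assumptions.

```javascript
// Sketch of the Data Table token-cache check (field names match the
// tokens table columns: accessToken, tokenExpiry). Refresh margin and
// date format are assumptions for illustration.
const REFRESH_MARGIN_MS = 60 * 1000; // refresh slightly early to be safe

function isTokenValid(row, now = Date.now()) {
  if (!row || !row.accessToken || !row.tokenExpiry) return false;
  const expiry = Date.parse(row.tokenExpiry); // tokenExpiry stored as string
  if (Number.isNaN(expiry)) return false;
  return expiry - REFRESH_MARGIN_MS > now;
}

function resolveToken(cachedRow, now = Date.now()) {
  if (isTokenValid(cachedRow, now)) {
    return { action: "use_cached", token: cachedRow.accessToken };
  }
  // Here the workflow would POST the Client ID/Secret to the SendPulse
  // OAuth endpoint and write the new token back to the Data Table.
  return { action: "request_new_token" };
}
```

Only when the cached token is missing or close to expiry does the workflow hit the SendPulse OAuth endpoint, which is what keeps the API request count down.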
by Robin
Chat with Your Finances on Telegram

Ask questions like "How much did I spend on food last month?" and get instant answers from your financial data, directly in Telegram. This workflow connects your Google Sheets expense log to an AI-powered query engine that understands natural language, resolves ambiguous categories and person names, and sends back a clean, formatted summary in Telegram. No spreadsheets. No dashboards. Just chat with your financial data.

How it works
Simply send a message like:
- How much did we spend on groceries last month?
- Show me a breakdown by category for this week.

The workflow automatically:
1. Parses your intent using GPT-4.1-nano
2. Resolves categories and person names via mapping tables
3. Filters and aggregates expense data from Google Sheets
4. Returns a formatted summary directly in Telegram

If a category or person is unknown, the workflow uses AI to suggest the closest match and asks the user to confirm via Telegram inline buttons. Confirmed aliases are saved automatically, making the system self-learning over time.

Key Features
- **Natural language queries** in English and German
- **AI intent parsing** with GPT-4.1-nano (time range, person, category, filters)
- **Self-learning entity resolution** for categories and names
- **Interactive disambiguation** via Telegram inline buttons
- **Relative date support** (this week, last month, this year)
- **Group-by breakdowns** by category or person
- **Shared expense filtering** via common_only flag
- **Multi-user support** via Chat ID allowlist

Requirements
To run this workflow you need:
- **Telegram Bot** (created via @BotFather)
- **OpenAI API key**
- **Four Google Sheets**

Required sheets:
- expenses
- expense_categories
- categories_mapping
- person_mapping

Setup Time
Approx. 15 minutes. All required configuration steps are documented directly inside the workflow using "Action Required" notes in each workflow layer.

Examples
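The relative-date support mentioned above boils down to mapping a parsed time-range token onto a concrete date window before filtering the expenses sheet. A minimal sketch, assuming hypothetical token names ("this_month", "last_month", "this_year") rather than the template's actual identifiers:

```javascript
// Illustrative resolution of a relative time-range token into a
// concrete [from, to] date range for filtering expense rows.
// Token names are assumptions, not the workflow's real values.
function resolveRange(token, today = new Date()) {
  const y = today.getFullYear();
  const m = today.getMonth();
  switch (token) {
    case "this_month":
      return { from: new Date(y, m, 1), to: new Date(y, m + 1, 0) };
    case "last_month":
      // Day 0 of the current month is the last day of the previous month.
      return { from: new Date(y, m - 1, 1), to: new Date(y, m, 0) };
    case "this_year":
      return { from: new Date(y, 0, 1), to: new Date(y, 11, 31) };
    default:
      throw new Error(`Unknown range token: ${token}`);
  }
}
```

The AI parser only has to emit a stable token; the deterministic code handles calendar arithmetic, which keeps month-boundary and leap-year logic out of the prompt.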
by George Dan
How it works
1. A web form collects your infographic parameters: headline, topic, art style, layout, color palette, aspect ratio, resolution, etc.
2. An AI Agent (GPT-4 with live web search) researches your topic and writes an optimized prompt for the image generator.
3. The prompt is submitted to kie.ai's nano-banana-pro model to generate the infographic.
4. The workflow polls kie.ai every 15 seconds until the image is ready (up to ~5 minutes).
5. On success, the finished infographic is emailed to you as an attachment; on failure, a detailed error email is sent instead.

Set up steps
Set up takes about 5 minutes.
1. Add your OpenAI API key (for the AI research agent)
2. Add your kie.ai API key as a Header Auth credential (used by the generate and polling nodes)
3. Connect your Gmail account and update the recipient email address in the two Gmail nodes
4. Activate the workflow and open the form URL to start generating
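The 15-second polling step can be sketched as a decision function: each poll either succeeds, fails, retries, or times out once the ~5-minute budget (20 attempts) is spent. The status strings here are illustrative placeholders, not the actual kie.ai response schema.

```javascript
// Sketch of the polling decision: called once per poll cycle with the
// generation status and the zero-based attempt counter. Status values
// ("success"/"failed") are assumptions, not kie.ai's real fields.
const POLL_INTERVAL_S = 15;
const MAX_WAIT_S = 5 * 60; // ~5 minutes => 20 attempts at 15 s each

function nextPollAction(status, attempt) {
  if (status === "success") return "email_result";
  if (status === "failed") return "email_error";
  if ((attempt + 1) * POLL_INTERVAL_S >= MAX_WAIT_S) return "email_error"; // timed out
  return "wait_and_retry";
}
```

In n8n this maps naturally onto a Wait node plus an IF node looping back to the status-check HTTP Request until a terminal branch is taken.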
by Abdullah Alshiekh
What Problem Does It Solve?
Businesses waste countless hours manually gathering online insights across platforms. Marketing and strategy teams need fast, structured visibility into what customers are saying and what competitors are promoting across regions and platforms. This workflow automates that process end-to-end by:
- **Monitoring multiple platforms** (Facebook, Instagram, Google) across all selected regions.
- **Extracting and cleaning live data** with precision and compliance through **Decodo's advanced web intelligence engine**.
- **Providing AI summaries:** It uses specialized AI agents to analyze the raw text and structure it into key insights.
- **Delivering a clear, ready-to-read daily report** directly to your inbox, with no dashboards and no manual effort.

How to Configure It
1. Set Up the Decodo Connection
   - In n8n, create a new Decodo Web Intelligence credential.
   - Paste your Decodo authentication token (available in your Decodo dashboard under "Web Scraping API"). Setup Manual
2. Choose Your Regions and Topics
   - Edit the "Set - Regions" node to list your markets
   - Add your key search topics or terms
3. Review the AI-Generated Insights
   - **Decodo** fetches and cleans the latest content from social and web sources.
   - Gemini-based AI agents summarize it into a structured report segmented by region and platform.
   - The workflow emails the insights automatically, providing a quick morning market snapshot.

Why It Works So Well
Decodo provides the backbone (real-time, clean, and region-specific data) while AI transforms that data into business intelligence you can act on.

If you need any help Get in Touch
by Masaki Go
About This Template
Turn every sales meeting into a coaching opportunity. This workflow automatically analyzes tldv meeting recordings using OpenAI (GPT-4) to provide instant, actionable feedback to your sales team. It acts as a virtual sales coach, evaluating key performance metrics like listening skills, question quality, and customer engagement without requiring a manager to listen to every call.

How It Works
1. Trigger: The workflow starts automatically when a meeting transcript is ready in tldv (via Webhook).
2. Data Retrieval: It fetches the full meeting details and transcript from the tldv API.
3. AI Analysis: GPT-4 analyzes the conversation to score the sales rep's performance (e.g., Speaking vs. Listening balance, Clarity, Next Steps).
4. Delivery:
   - Slack: Sends a summary notification and a detailed markdown report to the team channel.
   - Google Sheets: Archives the scores and meeting data for long-term tracking.

Who It's For
- **Sales Managers:** To monitor team performance and identify coaching needs at scale.
- **Account Executives:** To get immediate feedback on their calls and self-correct.
- **Sales Enablement:** To track KPI trends over time.

Requirements
- **n8n** (Cloud or Self-hosted)
- **tldv (Business Plan)** for API/Webhook access
- **OpenAI API Key** (GPT-4 access recommended)
- **Slack** Workspace
- **Google Sheets**

Setup Steps
1. Credentials: Configure "Header Auth" for tldv (x-api-key) and OpenAI (Authorization). Connect OAuth for Slack and Google Sheets.
2. Webhook: Copy the Production URL from the first node (Webhook) and add it to your tldv Settings > Integrations > Webhooks (select Event: TranscriptReady).
3. Google Sheets: Create a sheet (e.g., named Sales Feedback) with columns for Meeting Name, Score, Summary, etc.

Note: Be sure to update the Google Sheets node in the workflow to match your specific Sheet Name and Column headers.
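One of the scored metrics, Speaking vs. Listening balance, can also be computed deterministically from the transcript before (or alongside) the GPT-4 analysis. A minimal sketch, assuming a hypothetical segment shape of `{ speaker, text }`, which may differ from the actual tldv transcript format:

```javascript
// Illustrative word-count talk ratio for the "Speaking vs. Listening
// balance" metric. The segment shape ({ speaker, text }) is an
// assumption for this sketch, not the tldv API schema.
function talkRatio(segments, repName) {
  let repWords = 0;
  let totalWords = 0;
  for (const seg of segments) {
    const words = seg.text.trim().split(/\s+/).filter(Boolean).length;
    totalWords += words;
    if (seg.speaker === repName) repWords += words;
  }
  if (totalWords === 0) return 0;
  return repWords / totalWords; // fraction of total words spoken by the rep
}
```

Feeding a hard number like this into the GPT-4 prompt keeps the qualitative coaching feedback anchored to measurable data.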
by Connor Provines
Analyze email performance and optimize campaigns with AI using SendGrid and Airtable

This n8n template creates an automated feedback loop that pulls email metrics from SendGrid weekly, tracks performance in Airtable, analyzes trends across the last 4 weeks, and generates specific recommendations for your next campaign. The system learns what works and provides data-driven insights directly to your email creation process.

Who's it for
Email marketers and growth teams who want to continuously improve campaign performance without manual analysis. Perfect for businesses running regular email campaigns who need actionable insights based on real data rather than guesswork.

Good to know
- After 4-6 weeks, expect 15-30% improvement in primary metrics
- Requires at least 2 weeks of historical data to generate meaningful analysis
- The system improves over time as it learns from your audience
- Implementation time: ~1 hour total

How it works
1. Schedule trigger runs weekly (typically Monday mornings)
2. Pulls the previous week's email statistics from SendGrid (delivered, opens, clicks, rates)
3. Updates the previous week's record in Airtable with actual performance data
4. GPT-4 analyzes trends across the last 4 weeks, identifying patterns and opportunities
5. Creates a new Airtable record for the upcoming week with specific recommendations: what to test, how to change it, expected outcome, and confidence level
6. Your email creation workflow pulls these recommendations when generating new campaigns
7. After sending, the actual email content is saved back to Airtable to close the loop

How to set up
1. Create Airtable base: Make a table called "Email Campaign Performance" with fields for week_ending, delivered, unique_opens, unique_clicks, open_rate, ctr, decision, test_variable, test_hypothesis, confidence_level, test_directive, implementation_instruction, subject_line_used, email_body, icp, use_case, baseline_performance, success_metric, target_improvement
2. Configure SendGrid: Add your API key to the "SendGrid Data Pull" node and test the connection
3. Set up Airtable credentials: Add a Personal Access Token and select your base/table in all Airtable nodes
4. Add OpenAI credentials: Configure the GPT-4 API key in the "Previous Week Analysis" node
5. Test with sample data: Manually add 2-3 weeks of data to Airtable, or run the workflow if you already have historical data
6. Schedule weekly runs: Set the workflow to trigger every Monday at 9 AM (or after your weekly campaign sends)
7. Integrate with email creation: Add an Airtable search node to your email workflow to retrieve current recommendations, and an update node to save what was sent

Requirements
- SendGrid account with API access (or similar ESP with statistics API)
- Airtable account with Personal Access Token
- OpenAI API access (GPT-4)

Customizing this workflow
- **Use a different email platform**: Replace the SendGrid node with Mailchimp, Brevo, or any ESP that provides a statistics API; adjust field mappings accordingly
- **Add more metrics**: Extend Airtable fields to track bounce rate, unsubscribe rate, spam complaints, or revenue attribution
- **Change analysis frequency**: Adjust the schedule trigger for bi-weekly or monthly analysis instead of weekly
- **Swap AI models**: Replace GPT-4 with Claude or Gemini in the analysis node
- **Multi-campaign tracking**: Duplicate the workflow for different campaign types (newsletters, promotions, onboarding) with separate Airtable tables
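The derived Airtable fields open_rate and ctr come from the raw weekly SendGrid counts. A minimal sketch, assuming ctr means unique clicks over delivered and rounding to one decimal place (both assumptions; adjust to match your own reporting conventions):

```javascript
// Sketch of deriving open_rate and ctr (as percentages) from raw
// weekly counts. Field names match the Airtable schema listed above;
// the ctr definition (clicks / delivered) and rounding are assumptions.
function weeklyRates({ delivered, unique_opens, unique_clicks }) {
  const pct = (num, den) => (den > 0 ? Math.round((num / den) * 1000) / 10 : 0);
  return {
    open_rate: pct(unique_opens, delivered),  // % of delivered emails opened
    ctr: pct(unique_clicks, delivered),       // % of delivered emails clicked
  };
}
```

Computing these in the workflow rather than in Airtable formulas keeps the record self-contained, so the GPT-4 trend analysis sees finished numbers.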
by ้ฃฏ็ใๆญฃๅนน
Analyze Furusato Nozei trends from Google News to Slack

This workflow acts as a specialized market analyst for Japan's "Furusato Nozei" (Hometown Tax) system. It automates the process of monitoring related news, validating keyword popularity via search trends, and delivering a concise, strategic report to Slack. By combining RSS feeds, AI agents, and real-time search data, this template helps marketers and municipal researchers stay on top of the highly competitive Hometown Tax market without manual searching.

Who is this for?
- **Municipal Government Planners:** To track trending return gifts and competitor strategies.
- **E-commerce Marketers:** To identify high-demand keywords for Furusato Nozei portals.
- **Content Creators:** To find trending topics for blogs or social media regarding tax deductions.
- **Market Researchers:** To monitor the seasonality and shifting interests in the Hometown Tax sector.

How it works
1. News Ingestion: The workflow triggers on a schedule and fetches the latest "Furusato Nozei" articles from Google News via RSS.
2. AI Analysis & Extraction: An AI Agent (using OpenRouter) summarizes the news cluster and identifies the most viable search keyword (e.g., "Scallops," "Travel Vouchers," or specific municipalities).
3. Data Validation: The workflow queries the Google Trends API (via SerpApi) to retrieve search volume history for the extracted keyword in Japan.
4. Strategic Reporting: A second AI Agent analyzes the search trend data alongside the keyword to generate a market insight report.
5. Delivery: The final report is formatted and sent directly to a Slack channel.

Requirements
To use this workflow, you will need:
- **n8n** (Version 1.0 or later recommended).
- **OpenRouter API Key** (or you can swap the model nodes for OpenAI/Anthropic).
- **SerpApi Key** (required to fetch Google Trends data programmatically).
- **Slack Account** (with permissions to post to a channel).

How to set up
1. Configure Credentials:
   - Add your OpenRouter API key to the Chat Model nodes.
   - Add your SerpApi key to the Google Trends API node.
   - Connect your Slack account in the Send a message node.
2. Check the RSS Feed: The RSS Read node is pre-configured for "Furusato Nozei" (ふるさと納税). You can leave this as is.
3. Regional Settings: The workflow is pre-set for Japan (jp / ja). If you need to change this, check the Workflow Configuration and Google Trends API nodes.
4. Schedule: Enable the Schedule Trigger node to run at your preferred time (default is 9:00 AM JST).

How to customize
- **Change the Topic:** While this is optimized for Furusato Nozei, you can change the RSS feed URL to track other Japanese market trends (e.g., NISA, Inbound Tourism).
- **Swap AI Models:** The template uses OpenRouter, but you can easily replace the "Chat Model" nodes with OpenAI (GPT-4) or Anthropic (Claude) depending on your preference.
- **Adjust AI Prompts:** The AI prompts are currently in Japanese to match the content. You can modify the system instructions in the AI Agent nodes if you prefer English reports.
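Before the second AI Agent writes its report, the raw Google Trends timeline can be condensed into a few headline numbers. A sketch under the assumption that the SerpApi interest-over-time response exposes a `timeline_data[].values[]` array; verify against your actual response before relying on the shape:

```javascript
// Sketch of condensing a SerpApi Google Trends timeline into a small
// summary for the reporting agent. The timeline_data[].values[] shape
// is an assumption based on SerpApi's interest-over-time format.
function summarizeTrend(timelineData) {
  const points = timelineData.map((t) => Number(t.values[0].value));
  const avg = points.reduce((a, b) => a + b, 0) / points.length;
  const recent = points.slice(-4); // last ~4 data points
  const recentAvg = recent.reduce((a, b) => a + b, 0) / recent.length;
  return {
    average: Math.round(avg),
    recentAverage: Math.round(recentAvg),
    direction: recentAvg > avg ? "rising" : recentAvg < avg ? "falling" : "flat",
    peak: Math.max(...points),
  };
}
```

Handing the LLM a pre-computed "rising/falling" signal and peak value tends to produce more grounded strategic commentary than pasting the full timeline into the prompt.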
by Nitin Garg
How it works
1. Schedule Trigger runs every 6 hours (customizable)
2. Apify Scraper fetches Upwork jobs matching your criteria
3. Deduplication filters out jobs you've already seen
4. AI Scoring (GPT-4) evaluates fit, client quality, and budget (0-100 score)
5. Filter keeps only jobs scoring 60+
6. Proposal Generator creates personalized proposals
7. Google Sheets logs all results
8. Telegram sends a summary notification

Setup steps
Time: ~15 minutes
1. Create a Google Sheet with a "Job ID" column
2. Get an Apify account + Upwork scraper actor
3. Get an OpenAI API key
4. Set environment variables: GOOGLE_SHEETS_DOC_ID, APIFY_ACTOR_ID, TELEGRAM_CHAT_ID
5. Create credentials: Google Sheets, Apify (Header Auth), OpenAI, Telegram
6. Connect credentials to workflow nodes

Who is this for?
- Freelancers actively applying to Upwork jobs
- Agencies monitoring multiple job categories
- Consultants prioritizing high-quality leads

Estimated costs
- **Per run:** $0.50-3.00 (Apify + OpenAI)
- **Monthly (4x/day):** $50-200
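The deduplication and scoring filter steps combine into one selection pass: drop any Job ID already logged to the Google Sheet, then keep only jobs the AI scored 60 or above. A minimal sketch with an illustrative `{ jobId, score }` shape:

```javascript
// Sketch of dedup + score filtering: jobs already in the "Job ID"
// column are skipped, then the 60+ threshold is applied.
// The { jobId, score } shape is illustrative.
const SCORE_THRESHOLD = 60;

function selectJobs(scrapedJobs, seenIds) {
  const seen = new Set(seenIds); // IDs previously logged to Google Sheets
  return scrapedJobs.filter((job) => {
    if (seen.has(job.jobId)) return false; // already processed
    return job.score >= SCORE_THRESHOLD;
  });
}
```

Using a Set for the seen IDs keeps the lookup O(1) per job, which matters once the sheet accumulates thousands of rows.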
by Panth1823
Keep your job listings database clean without manual checks. Every three days, this workflow fetches all active jobs from your Postgres database, runs each application URL through a validation check, identifies dead links via HTTP status codes and soft-404 redirect detection, then marks failed entries as inactive in both Supabase and Google Sheets simultaneously.

Who it's for
Teams running a job aggregator, career platform, or internal hiring tracker who store job listings in Postgres and want stale or broken apply links removed automatically, without waiting for user reports.

How it works
1. A Schedule Trigger fires every 3 days
2. All active jobs are fetched from your Postgres (Supabase) database via a SQL query
3. A Prepare URLs node filters out any rows with missing, malformed, or non-HTTP URLs before they're checked
4. An HTTP Request node sends a HEAD request to each apply_url
5. A Find Dead Jobs code node analyzes each response and flags a job as dead if:
   - Status code is 404 or 410
   - DNS resolution fails (ENOTFOUND)
   - Connection is refused (ECONNREFUSED)
   - A 301/302/307 redirect points to a different path, indicating the job was removed and the ATS is silently redirecting (soft-404 detection)
6. If dead jobs are found, an IF node routes them to both update nodes in parallel:
   - Supabase (Postgres): status set to inactive via parameterized SQL
   - Google Sheets: row updated to reflect the new status
7. If no dead jobs are detected, the workflow exits cleanly with no writes

Setup
1. Connect your Postgres credentials and confirm the query in the Fetch Active Jobs node matches your table and column names (apply_url, job_hash, job_title)
2. Connect your Google Sheets credentials and set the Resource ID and Sheet Name in the Mark Inactive node
3. Confirm the inactive status value in the Postgres update query matches what your app expects
4. (Optional) Adjust the soft-404 redirect detection logic in the Find Dead Jobs node if your ATS platforms use non-standard redirect patterns

Database columns expected
job_hash (unique identifier), apply_url, job_title, status

Requirements
- Self-hosted or cloud n8n instance
- Supabase (or any Postgres-compatible) database with an active jobs table
- Google Sheets with a matching jobs log
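The Find Dead Jobs classification described above can be sketched as a single predicate. The response shape (`statusCode`, `errorCode`, `redirectLocation`) is an assumption standing in for whatever your HTTP Request node actually returns; the detection rules themselves follow the list in "How it works":

```javascript
// Sketch of the dead-link predicate: hard failures (404/410, DNS,
// connection refused) plus the soft-404 heuristic, where a redirect
// to a different path suggests the ATS silently removed the listing.
// The response shape is an assumption for illustration.
function isDeadJob(applyUrl, res) {
  if (res.errorCode === "ENOTFOUND" || res.errorCode === "ECONNREFUSED") return true;
  if (res.statusCode === 404 || res.statusCode === 410) return true;
  if ([301, 302, 307].includes(res.statusCode) && res.redirectLocation) {
    const original = new URL(applyUrl);
    const target = new URL(res.redirectLocation, applyUrl); // resolve relative Location headers
    return target.pathname !== original.pathname; // same path (e.g. added query) is fine
  }
  return false;
}
```

Comparing only `pathname` (not the full URL) means a redirect that merely adds tracking query parameters is not treated as a removal, while a bounce to `/careers` from `/jobs/123` is.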
by Edson Encinas
This workflow automatically collects the latest technology news, filters for emerging topics, and uses AI to score relevance and generate clean, ready-to-share content. It helps you focus on high-impact updates while eliminating noise, making it ideal for curated tech feeds or internal intelligence channels.

Who's it for
- Automation engineers and builders using n8n
- Tech content curators and researchers
- Teams tracking AI, cybersecurity, or emerging tech trends

How it works / What it does
The workflow pulls articles from an RSS feed (e.g., TechCrunch) on a schedule. It filters recent and relevant topics, removes duplicates using Google Sheets, and sends each article to an AI model. The AI assigns an innovation score, generates a tweet, and suggests an image concept. Results are normalized, filtered by score (e.g., ≥8), ranked, and limited to the top items. Finally, formatted messages or tweets are sent to Slack and logged for tracking.

How to set up
1. Add your RSS feed URL
2. Configure OpenAI API credentials
3. Connect Google Sheets for deduplication and logging. Create a sheet with the following columns: link, guid, creator, pub_date, tweet, score, date_posted
4. Set up a Slack webhook or Slack node

Requirements
- n8n (self-hosted or cloud)
- OpenAI API key
- Google Sheets access
- Slack workspace with webhook enabled

How to customize the workflow
- Adjust the score threshold to control content quality
- Modify the AI prompt for tone or niche focus
- Change the output destination (X, Telegram, email)
- Extend logging fields for analytics or reporting
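The normalize-filter-rank-limit step after the AI scoring can be sketched as one shaping function. Field names mirror the sheet columns above; the defaults (threshold 8, top 3) are illustrative and map onto the adjustable score threshold mentioned under customization:

```javascript
// Sketch of post-AI shaping: coerce scores to numbers, keep articles
// at or above the threshold, rank by score descending, cap the output.
// Threshold and limit defaults are assumptions.
function topArticles(items, threshold = 8, limit = 3) {
  return items
    .map((it) => ({ ...it, score: Number(it.score) || 0 })) // normalize string scores
    .filter((it) => it.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}
```

Coercing the score with `Number()` first matters because LLM outputs often arrive as strings, and a string comparison like `"9" >= 8` would behave inconsistently across values.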
by ้ท่ฐทใ็ๅฎ
Who is this for
This template is perfect for sales professionals, account managers, and business development teams who want to make memorable impressions on their clients. It automates the tedious task of researching gift shops and preparation spots before important meetings.

What it does
This workflow automatically prepares personalized recommendations for client visits by monitoring your Google Calendar, enriching data from Notion, and using AI to select the perfect options.

How it works
1. Trigger: Activates when a calendar event containing keywords like "visit," "meeting," "client," or "dinner" is created or updated
2. Extract: Parses the company name from the event title
3. Enrich: Fetches customer preferences from your Notion database
4. Search: The Google Places API finds nearby gift shops and quiet cafes
5. Analyze: GPT-4 recommends the best options based on customer preferences
6. Notify: Sends a personalized message to Slack with recommendations

Example Slack Output
Here's what the final notification looks like:

Recommended Gift Shop
Patisserie Sadaharu AOKI (★ 4.6)
3-5-2 Marunouchi, Chiyoda-ku
Reason: The customer loves French desserts, so this patisserie's macarons would be perfect!

Pre-Meeting Cafe
Starbucks Reserve Roastery (★ 4.5)
5 min walk from meeting location

Set up steps
Setup time: approximately 15 minutes
1. Google Calendar: Connect your Google Calendar account and select your calendar
2. Notion Database: Create a customer database with "Company Name" (title) and "Preferences" (text) fields
3. Google Places API: Get an API key from Google Cloud Console and add it to the Configuration node
4. OpenAI: Connect your OpenAI account for AI-powered recommendations
5. Slack: Connect your Slack workspace and update the channel ID in the final node

Requirements
- Google Calendar account
- Notion account with a customer database
- Google Places API key (requires Google Cloud account)
- OpenAI API key
- Slack workspace with bot permissions

How to customize
- Search radius: Adjust the searchRadius parameter in the Configuration node (default: 1000 meters)
- Event keywords: Modify the Filter node conditions to match your calendar naming conventions
- Notification channel: Change the Slack channel ID to your preferred channel
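Before the GPT-4 step, the raw Google Places results can be narrowed to the strongest candidates. A sketch under the assumption that the nearby-search results expose `rating`, `user_ratings_total`, and `business_status` fields (standard Places API names, but verify against your response); the minimum rating and top-3 cap are illustrative choices:

```javascript
// Sketch of pre-filtering Places results before they reach GPT-4:
// keep operational, well-rated spots and rank by rating, breaking
// ties by review count. Field names follow the Places API nearby
// search, but treat the shape as an assumption.
function rankPlaces(places, minRating = 4.0) {
  return places
    .filter((p) => p.rating >= minRating && p.business_status === "OPERATIONAL")
    .sort((a, b) =>
      b.rating !== a.rating
        ? b.rating - a.rating
        : b.user_ratings_total - a.user_ratings_total
    )
    .slice(0, 3); // hand GPT-4 only the strongest candidates
}
```

Trimming the list first both cuts token usage and stops the model from recommending a permanently closed shop.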
by Jinash Rouniyar
PROBLEM
Managing multiple RAG AI agents can be complex when each has its own purpose and vector database. Manually tracking agents and deciding which one to query wastes time, and LLMs often struggle to determine which agent best fits a user's request. This workflow enables automated multi-agent orchestration, dynamically selecting and querying the correct agent using the Contextual AI Query Tool and Gemini 2.5 Flash.

How it works
1. A form trigger allows users to create new agents by specifying a name, description, and datastore, and uploading files.
2. A new agent is created with the provided information, and the files are ingested into the datastore.
3. The workflow checks the status of file ingestion every 30 seconds until the ingestion process is complete.
4. When users send queries, the Agent Orchestrator identifies the most relevant agent to generate grounded, context-aware responses.

Note: The document ingestion process is asynchronous and may take a few minutes before your agent has the document fully available in the datastore for querying.

How to set up
1. Create a free Contextual AI account and obtain your CONTEXTUALAI_API_KEY.
2. Add CONTEXTUALAI_API_KEY as an environment variable in n8n.
3. For the baseline model, we have used the Gemini 2.5 Flash model; you can find your Gemini API key here.

How to customize the workflow
- Replace the Form Trigger with a Webhook Trigger or manual input to integrate with custom systems.
- Swap Gemini 2.5 Flash with another LLM provider.
- Update the wait time as per your requirements.
- Modify the system prompt to fine-tune how the orchestration logic selects and queries agents.

You can check out the Contextual AI API reference for more details on agent creation and usage. If you have feedback or need support, please email feedback@contextual.ai.
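The 30-second ingestion poll reduces to checking whether every uploaded document has reached a terminal state before the agent is considered ready. A minimal sketch; the status strings are illustrative placeholders, not the exact Contextual AI API values:

```javascript
// Sketch of the ingestion-status check run every 30 seconds: the
// workflow keeps waiting until every document reports a terminal
// state. Status strings are assumptions, not the real API values.
const INGEST_POLL_INTERVAL_S = 30;

function ingestionComplete(documents) {
  // "failed" is also terminal: re-polling will not fix a failed ingest,
  // so the loop should stop and surface the error instead.
  return documents.every((d) => d.status === "completed" || d.status === "failed");
}
```

In n8n this pairs with a Wait node set to `INGEST_POLL_INTERVAL_S` and an IF node that loops back to the status request while the predicate is false.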