by zawanah
This n8n workflow demonstrates how to use AI to update your grocery list in Asana via Telegram chat or voice.

**Use cases**

Update grocery list details in Asana, e.g. check or uncheck items, update expiry dates, update quantities, etc.

**How it works**

- Instruct the Telegram bot (via chat or voice) to update a grocery item using natural language, for example: "we just bought 10 cartons of milk that expires in 6 months".
- If via text, the message is sent straight to the Grocery Agent. If via voice, the voice file is downloaded and then transcribed into text using OpenAI.
- Once the Grocery Agent receives the text, it searches for the item in your grocery list in Asana.
- It then checks the item off (since it was bought) and updates the quantity and expiry date accordingly.
- Once the task is done, it responds with the changes it made and includes a hyperlink to Asana in case you want to review them.

**How to set up**

- Set up a Telegram bot via BotFather. See setup instructions here.
- Set up the OpenAI API for transcription services (credits required) here.
- Set up an OpenRouter account. See details here.
- Set up the Asana API using the account that holds your grocery list. See details here.

**Customization options**

You can track additional custom fields beyond expiry dates and quantity, for example food type or date purchased.

**Requirements**

- Asana account where you manage your grocery list
- Telegram bot
- OpenAI account
- OpenRouter account
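As a rough sketch, the text-vs-voice branch could be expressed as code (in the workflow itself this is a plain IF/Switch node); the `message.text` and `message.voice.file_id` fields follow the Telegram Bot API update shape:

```javascript
// Route an incoming Telegram update to the text or voice path.
// message.voice.file_id is what you would pass to getFile to download
// the audio for transcription; message.text goes straight to the agent.
function routeUpdate(update) {
  const msg = update.message || {};
  if (msg.voice && msg.voice.file_id) {
    // Voice path: download the file, then transcribe with OpenAI
    return { path: "voice", fileId: msg.voice.file_id };
  }
  if (typeof msg.text === "string") {
    // Text path: forward the message straight to the Grocery Agent
    return { path: "text", text: msg.text };
  }
  return { path: "unsupported" };
}
```

Stickers, photos, and other message types fall through to the unsupported path, which the workflow can ignore or answer with a help message.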
by Konrad Roziewski
Currently Work-In-Progress.

This n8n template creates an intelligent AI assistant that responds to chat messages, providing conversational access to your Meta Ads data. Powered by an OpenAI GPT-5 model and equipped with memory to maintain context, this agent can interact with your Meta Ads accounts via the Facebook Graph API.

Users can ask it to:

- **List all connected ad accounts.**
- **Retrieve detailed information** for a specific ad account, including active campaigns, ad sets, and individual ads.
- **Fetch performance insights** (e.g., spend, impressions, conversions, CPC, CPM, CTR, ROAS) for a given account and time range.

Ideal for marketers, advertisers, or anyone needing quick, conversational access to their Meta Ads performance data and campaign structure without logging into Ads Manager directly.

Requires: OpenAI and Facebook Graph API credentials.
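For illustration, a request to the insights endpoint might be constructed like this; `/act_{id}/insights`, `fields`, and `date_preset` are standard Graph API parameters, while the account id and API version below are placeholders:

```javascript
// Build a Graph API insights URL for one ad account.
// fields and datePreset map directly onto the Graph API query string.
function buildInsightsUrl(accountId, { fields, datePreset, version = "v21.0" }) {
  const params = new URLSearchParams({
    fields: fields.join(","),       // e.g. spend, impressions, ctr
    date_preset: datePreset,        // e.g. last_7d, last_30d
  });
  return `https://graph.facebook.com/${version}/act_${accountId}/insights?${params}`;
}
```

The agent's HTTP tool would call this URL with the user's access token; breakdowns and custom `time_range` objects can be added the same way.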
by David Olusola
🎥 Auto-Summarize Zoom Recordings → Slack & Email

Never lose meeting insights again! This workflow automatically summarizes Zoom meeting recordings using OpenAI GPT-4 and delivers structured notes directly to Slack and email.

⚙️ How It Works

1. Zoom Webhook – triggers when a recording is completed.
2. Normalize Data – extracts meeting details + transcript.
3. OpenAI GPT-4 – creates a structured meeting summary.
4. Slack – posts the summary to your chosen channel.
5. Email – delivers the summary to your inbox.

🛠️ Setup Steps

1. Zoom: Create a Zoom App with the recording.completed event and add the workflow's webhook URL.
2. OpenAI: Add your API key to n8n. Use GPT-4 for best results.
3. Slack: Connect Slack credentials and replace YOUR_SLACK_CHANNEL with your channel ID.
4. Email: Connect Gmail or SMTP and replace the recipient email(s).

📊 Example Slack Message

📌 Zoom Summary
Topic: Sales Demo Pitch
Host: alex@company.com
Date: 2025-08-30
Summary:
- Reviewed Q3 sales pipeline
- Discussed objection handling
- Assigned action items for next week

⚡ Get instant summaries from every Zoom meeting — no more manual note-taking!
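The Normalize Data step could look roughly like this in a Code node; the field names (`payload.object.topic`, `host_email`, `recording_files`) follow Zoom's recording.completed webhook payload, but verify them against your own events:

```javascript
// Extract the meeting details and transcript URL the summary needs
// from a Zoom recording.completed webhook body.
function normalizeRecording(body) {
  const obj = (body.payload && body.payload.object) || {};
  const transcriptFile = (obj.recording_files || []).find(
    (f) => f.file_type === "TRANSCRIPT"
  );
  return {
    topic: obj.topic || "Untitled meeting",
    host: obj.host_email || "unknown",
    date: (obj.start_time || "").slice(0, 10), // keep YYYY-MM-DD
    transcriptUrl: transcriptFile ? transcriptFile.download_url : null,
  };
}
```

If `transcriptUrl` is null (transcription disabled or not yet ready), the workflow can stop early instead of sending an empty summary.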
by Ali Muthana
**Who's it for**

This template is for professionals, students, and investors who want a simple daily finance briefing. It is useful for anyone who follows private equity, mergers & acquisitions, and general market news but prefers short summaries instead of reading long articles.

**How it works**

- The workflow runs twice a day using a schedule trigger (default 09:00 and 15:00).
- It pulls articles from three RSS feeds: NYT Private Equity, DealLawyers M&A, and Yahoo Finance.
- The items are merged and limited to the five most recent stories.
- A Code node formats them into a clean block of text.
- An AI Agent rewrites each article into a short, engaging 5–6 sentence summary.
- The results are delivered directly to your inbox via Gmail.

**How to set up**

1. Add your Gmail credential and replace {{RECIPIENT_EMAIL}} with your email.
2. Insert your OpenAI API key.
3. (Optional) Replace the RSS feed URLs with your preferred sources.
4. Adjust the schedule times if needed.

**Requirements**

- n8n v1.112+
- Gmail credential
- OpenAI API key

**How to customize**

You can add more feeds, increase the number of articles, or translate summaries into another language. You can also deliver the summaries to Slack, Notion, or Google Sheets instead of email.
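The merge, limit, and format steps could be sketched as a single Code node; the item shape (`title`, `link`, `isoDate`) mirrors typical RSS-node output and may differ for your feeds:

```javascript
// Merge items from several feeds, keep the newest `limit` stories,
// and format them as a numbered text block for the AI Agent.
function buildDigest(feeds, limit = 5) {
  const items = feeds
    .flat()
    .sort((a, b) => new Date(b.isoDate) - new Date(a.isoDate))
    .slice(0, limit);
  return items
    .map((it, i) => `${i + 1}. ${it.title}\n${it.link}`)
    .join("\n\n");
}
```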
by Shun Nakayama
Instagram Hashtag Generator Workflow

This workflow automatically generates optimal hashtags for your Instagram posts by analyzing captions and fetching real-time engagement data.

**Key Features**

- **100% official API & free**: Uses ONLY the official Instagram Graph API. No expensive third-party tools or risky scraping methods are required.
- **Safe & reliable**: Relying on the official API ensures compliance and long-term stability.
- **Smart caching**: Includes a Google Sheets caching mechanism to maximize the value of the official API's rate limit (30 searches per 7 days).

**Workflow Overview**

1. Caption input: Set your caption manually or via a workflow trigger.
2. AI suggestions: GPT-4o-mini analyzes the caption and suggests 10 relevant hashtags, balancing popular ("big word") and niche keywords.
3. Official API search (Instagram Graph API): Fetches hashtag IDs using the ig_hashtag_search endpoint, then retrieves engagement metrics (average likes, average comments) using each ID.
4. Selection & sorting: Sorts candidates by engagement metrics and selects the top 5 most effective hashtags that balance relevance and engagement.
5. Output: Returns the final list of hashtags as text.

**Setup Steps**

1. Import to n8n: Copy the content of workflow_hashtag_generator.json and paste it into your n8n canvas, or import the file directly.
2. Credentials:
   - OpenAI account: Connect your OpenAI credentials.
   - Facebook Graph account: Connect your Facebook Graph API credentials.
3. Configuration:
   - Instagram Business ID: Update the YOUR_INSTAGRAM_BUSINESS_ACCOUNT_ID placeholder in the Get Hashtag Info and Get Hashtag Metrics nodes with your actual Business Account ID.
   - Google Spreadsheet ID: Update the YOUR_SPREADSHEET_ID placeholder in the Fetch Cached Hashtags and Save to Cache nodes.
4. Adjustments: You can adjust the sorting or filtering logic in the Aggregate & Rank Candidates node's JavaScript code (e.g., exclude tags with fewer than 1,000 posts) if needed.

**Important Notes on API Limits**

The official Instagram hashtag search API (ig_hashtag_search) allows 30 unique hashtag queries per rolling 7-day period.

- **Why this is fine**: This workflow caches results in Google Sheets. Once a tag is fetched, it doesn't need to be queried again for a while, allowing you to build up a large database of tags over time without hitting the limit.
- **Recommendation**: Use mock data during initial testing to save your API quota.
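The quota-protecting cache check might be sketched as follows; the cached-row shape (`tag`, `fetchedAt`) is an assumption about the Google Sheets layout, not the template's exact schema:

```javascript
// Only tags missing from the cache, or older than the rolling 7-day
// window, are sent to ig_hashtag_search; the rest reuse cached metrics.
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function splitByCache(candidates, cachedRows, now = Date.now()) {
  const fresh = new Map(
    cachedRows
      .filter((r) => now - new Date(r.fetchedAt).getTime() < WEEK_MS)
      .map((r) => [r.tag, r])
  );
  return {
    cached: candidates.filter((t) => fresh.has(t)).map((t) => fresh.get(t)),
    toQuery: candidates.filter((t) => !fresh.has(t)),
  };
}
```

Keeping `toQuery` short is what stretches the 30-query allowance across many posts.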
by SendPulse
**How it works**

This n8n template automates lead processing from your website. It receives customer data via a Webhook, stores the customer's contact (email or phone number) in the respective SendPulse address books, and uses the SendPulse MCP Server to send personalized welcome messages (email or SMS) generated with AI. The template also includes built-in SendPulse token management with caching in a Data Table, which reduces the number of unnecessary API requests.

SendPulse's MCP server is a tool that helps you manage your account through a chat with an AI assistant. It uses SendPulse API methods to get information and perform actions, such as requesting statistics, running message campaigns, or updating user data. The MCP server acts as middleware between your AI assistant and your SendPulse account. It processes requests through the SendPulse API and sends results back to the chat, so you can manage everything without leaving the conversation.

Once connected, the MCP server operates as follows:

1. You ask your AI assistant something in chat.
2. It forwards your request to the MCP server.
3. The MCP server calls the API to get data or perform an action.
4. The AI assistant sends the result back to your chat.

**Set up**

Requirements:

- An active SendPulse account.
- Client ID and Client Secret from your SendPulse account.
- An API key from your OpenAI account to power the AI agent.

Setup steps:

1. Get your OpenAI API key: https://platform.openai.com/api-keys
2. Add your OpenAI API key to the OpenAI Chat Model node in the n8n workflow.
3. Get your Client ID and Client Secret from your SendPulse account: https://login.sendpulse.com/settings/#api
4. Add your Client ID and Client Secret to the Workflow Configuration node.
5. Add your Client ID and Client Secret to the SendPulse MCP Client node as the X-SP-ID and X-SP-SECRET headers in Multiple Headers Auth.
6. In the Workflow Configuration node, change the mailing list names and the senderName, senderEmail, smsSender, routeCountryCode, and routeType fields as needed.
7. Create a tokens table with the columns hash (string), accessToken (string), and tokenExpiry (string) in the Data Tables section of your n8n account.
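The token-caching decision against that Data Table row might look like this; the 60-second safety margin is an illustrative choice, and the row shape matches the hash / accessToken / tokenExpiry columns above:

```javascript
// Reuse the stored access token while tokenExpiry is still in the
// future (minus a safety margin); otherwise request a fresh token
// from SendPulse and overwrite the row.
function needsRefresh(row, now = Date.now(), marginMs = 60 * 1000) {
  if (!row || !row.accessToken || !row.tokenExpiry) return true;
  return new Date(row.tokenExpiry).getTime() - marginMs <= now;
}
```

Skipping the refresh whenever `needsRefresh` returns false is exactly what cuts down the unnecessary token requests mentioned above.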
by Cheng Siong Chin
**How It Works**

This workflow automates cross-factory operations management by deploying a multi-agent AI system that validates production data, coordinates scheduling, procurement, and quality escalation, then routes outcomes by priority. Designed for manufacturing operations managers, supply chain coordinators, and factory floor teams, it eliminates manual coordination delays and ensures critical issues trigger immediate alerts.

A schedule trigger fetches production and supply chain data in parallel and merges them, then passes the result to an Operations Validation Agent for data integrity checks. A Cross-Factory Coordination Agent orchestrates three sub-agents (Scheduling, Procurement, and Quality Escalation), producing consolidated coordination outputs. Results are routed by priority: high and critical cases trigger dedicated Slack alerts, while routine operations are logged for standard review.

**Setup Steps**

1. Set the schedule trigger interval to match your operational review frequency.
2. Add OpenAI API credentials to all OpenAI Model nodes.
3. Connect production and supply chain data sources to the fetch nodes.
4. Configure Slack credentials for the high-priority and critical alert channels.
5. Define priority routing thresholds in the Route by Priority rules node.

**Prerequisites**

- Slack workspace with bot token
- Production and supply chain data sources (API or database)

**Use Cases**

Automated cross-factory scheduling conflict detection and resolution.

**Customization**

Add sub-agents for logistics, maintenance, or inventory optimisation.

**Benefits**

Automates cross-factory coordination across scheduling, procurement, and quality.
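The Route by Priority rules could be sketched as follows; the priority labels and Slack channel names are placeholders to be matched to your agents' output:

```javascript
// Route a consolidated coordination output by its priority label:
// high and critical go to dedicated Slack alerts, the rest to the log.
function routeByPriority(result) {
  const p = (result.priority || "").toLowerCase();
  if (p === "critical") return { route: "slack", channel: "#ops-critical" };
  if (p === "high") return { route: "slack", channel: "#ops-high" };
  return { route: "log" };
}
```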
by Masaki Go
**About This Template**

Turn every sales meeting into a coaching opportunity. This workflow automatically analyzes tldv meeting recordings using OpenAI (GPT-4) to provide instant, actionable feedback to your sales team. It acts as a virtual sales coach, evaluating key performance metrics like listening skills, question quality, and customer engagement without requiring a manager to listen to every call.

**How It Works**

1. Trigger: The workflow starts automatically when a meeting transcript is ready in tldv (via webhook).
2. Data retrieval: It fetches the full meeting details and transcript from the tldv API.
3. AI analysis: GPT-4 analyzes the conversation to score the sales rep's performance (e.g., speaking vs. listening balance, clarity, next steps).
4. Delivery:
   - Slack: Sends a summary notification and a detailed markdown report to the team channel.
   - Google Sheets: Archives the scores and meeting data for long-term tracking.

**Who It's For**

- **Sales managers**: Monitor team performance and identify coaching needs at scale.
- **Account executives**: Get immediate feedback on calls and self-correct.
- **Sales enablement**: Track KPI trends over time.

**Requirements**

- **n8n** (Cloud or self-hosted)
- **tldv (Business Plan)** for API/webhook access
- **OpenAI API key** (GPT-4 access recommended)
- **Slack** workspace
- **Google Sheets**

**Setup Steps**

1. Credentials: Configure "Header Auth" for tldv (x-api-key) and OpenAI (Authorization). Connect OAuth for Slack and Google Sheets.
2. Webhook: Copy the Production URL from the first node (Webhook) and add it to your tldv Settings > Integrations > Webhooks (select Event: TranscriptReady).
3. Google Sheets: Create a sheet (e.g., named Sales Feedback) with columns for Meeting Name, Score, Summary, etc. Be sure to update the Google Sheets node in the workflow to match your specific sheet name and column headers.
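The speaking-vs-listening balance GPT-4 is asked to assess can also be computed mechanically from transcript segments; the segment shape (`speaker`, `text`) is an assumption about the tldv transcript format, so adapt it to the actual API response:

```javascript
// Fraction of total words spoken by the sales rep.
// A value near 0.5 or below suggests good listening balance.
function talkRatio(segments, repName) {
  let rep = 0, total = 0;
  for (const s of segments) {
    const words = s.text.trim().split(/\s+/).length;
    total += words;
    if (s.speaker === repName) rep += words;
  }
  return total ? rep / total : 0;
}
```

Feeding a precomputed ratio like this into the prompt keeps the model's scoring grounded in the actual transcript rather than its own estimate.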
by Cheng Siong Chin
**How It Works**

This workflow automates misinformation and information manipulation detection using a coordinated multi-agent AI architecture. It is designed for trust and safety teams, media analysts, researchers, and platform moderators who need scalable, structured threat assessment.

The pipeline begins when a trigger initiates content analysis. A central Misinformation Detection supervisor agent coordinates three specialised sub-agents:

- a Narrative Pattern Detector that identifies recurring disinformation themes via semantic clustering,
- a Bot Behaviour Analyser that detects coordinated inauthentic activity using propagation and temporal pattern tools, and
- a Manipulation Technique Classifier that maps content to known influence tactics using a risk heatmap and taxonomy tools.

Each agent uses a dedicated AI model and memory. Results are passed to a structured output parser, formatted for readability, and appended to Google Sheets for ongoing risk tracking and audit.

**Setup Steps**

1. Connect OpenAI credentials to the Supervisor, Narrative, Bot, and Manipulation classifier model nodes.
2. Configure Google Sheets credentials and set the target spreadsheet ID in the Store Risk Assessment node.
3. Set the memory buffer window in each sub-agent's Memory node to match your analysis context length.

**Prerequisites**

- Google Sheets API credentials
- n8n instance (v1.0+)
- Access to propagation/temporal data APIs
- Google account with the target Sheet pre-created

**Use Cases**

Platform trust and safety teams flagging viral misinformation campaigns.

**Customisation**

Replace Google Sheets with a database or SIEM output.

**Benefits**

Parallel multi-agent analysis cuts manual review time significantly.
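As one example of the temporal signals the Bot Behaviour Analyser's tools could surface, here is a simple sliding-window burst detector; the window size and threshold are illustrative, not values from the template:

```javascript
// Flag an account when more than maxPosts posts fall inside any
// windowMs-wide sliding window -- a crude coordinated-activity signal.
function burstDetected(timestamps, windowMs = 60_000, maxPosts = 10) {
  const ts = [...timestamps].sort((a, b) => a - b);
  for (let i = 0; i < ts.length; i++) {
    let j = i;
    while (j < ts.length && ts[j] - ts[i] < windowMs) j++;
    if (j - i > maxPosts) return true;
  }
  return false;
}
```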
by Rajeet Nair
This workflow implements a cost-optimized AI routing system using n8n. It intelligently decides whether a request should be handled by a low-cost model or escalated to a higher-quality model based on response confidence. The goal is to minimize LLM usage costs while maintaining high answer quality.

A query is first processed by a cheaper model. The response is then evaluated by a confidence-scoring AI agent. If the response quality is insufficient, the workflow automatically escalates the request to a more capable model. This approach is useful for building scalable AI systems where most queries can be answered cheaply, while complex queries still receive high-quality responses.

**How It Works**

1. Webhook trigger: Receives a user query from an external application.
2. Workflow configuration: Defines parameters such as the confidence threshold, cheap model cost, and expensive model cost.
3. Cheap model response: The query is first processed using GPT-4o-mini to minimize cost.
4. Confidence evaluation: An AI agent analyzes the response quality, evaluating accuracy, completeness, clarity, and relevance.
5. Structured output parsing: The evaluator returns structured data including a confidence score, an explanation, and an escalation recommendation.
6. Decision logic: If the confidence score is below the configured threshold, the workflow escalates the request.
7. Expensive model escalation: The query is reprocessed using GPT-4o for a higher-quality answer.
8. Cost calculation: Token usage is analyzed to estimate the total cost and the cost difference between models.
9. Final response formatting: The workflow returns the AI response, the model used, the confidence score, the escalation status, and the estimated cost.

**Setup Instructions**

1. Create an OpenAI credential in n8n.
2. Configure the following nodes: Cheap Model (GPT-4o-mini), Expensive Model (GPT-4o), and the OpenAI Chat Model used by the confidence evaluator agent.
3. Adjust the configuration values in the Workflow Configuration node: confidenceThreshold, cheapModelCostPer1kTokens, expensiveModelCostPer1kTokens.
4. Deploy the workflow and send requests to the webhook URL.

Example webhook payload:

{ "query": "Explain how photosynthesis works." }
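The decision logic and cost estimate might be sketched as follows; the per-1k-token prices in the test are placeholders from the Workflow Configuration node, not real model pricing:

```javascript
// Escalate when the evaluator's confidence falls below the configured
// threshold, and estimate cost from token usage at the chosen model's rate.
function decide(confidence, tokens, cfg) {
  const escalate = confidence < cfg.confidenceThreshold;
  const rate = escalate
    ? cfg.expensiveModelCostPer1kTokens
    : cfg.cheapModelCostPer1kTokens;
  return { escalate, estimatedCost: (tokens / 1000) * rate };
}
```

The cost difference between models is then just the same token count priced at both rates.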
by Cheng Siong Chin
**How It Works**

This workflow automates ethics disclosure intake, investigation, risk routing, and escalation for compliance officers, legal teams, and ethics oversight boards. Disclosures arrive via webhook and are processed by a central Governance Agent with persistent memory, supported by four specialised AI sub-agents: an Ethics Monitoring Agent (flags policy breaches), an Investigation Agent (conducts structured inquiry), a Reporting Agent (generates case summaries), and an Escalation Agent (determines escalation need). Shared tools include Audit Trail Storage, the Policy Database API, and the Slack Notification Tool.

A Governance Output Parser structures results for a Risk Level Router, which splits cases into critical and standard tracks. Critical cases trigger Slack alerts to the oversight team; all cases are stored and merged before a final response is dispatched. This eliminates manual triage, ensures consistent policy application, and maintains a complete audit trail for regulatory accountability.

**Setup Steps**

1. Configure the webhook URL in the Ethics Disclosure Webhook node with secure authentication.
2. Set AI model credentials (OpenAI/Anthropic) in all agent and model nodes.
3. Connect Slack credentials and the oversight channel.
4. Configure the Policy Database API with your organisation's ethics policy endpoint or dataset.
5. Connect database/Google Sheets credentials.
6. Test with sample disclosure payloads across both risk tracks before activating.

**Prerequisites**

- Slack workspace and bot token
- Ethics policy database or API endpoint
- Database or Google Sheets for case and audit storage

**Use Cases**

Automated triage and escalation of employee ethics disclosures in regulated industries.

**Customisation**

Adjust the Risk Level Router thresholds to match organisational severity definitions.

**Benefits**

Eliminates manual disclosure triage and processes cases consistently at scale.
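An audit-trail record for one disclosure, as it might leave the Governance Output Parser before the Risk Level Router, could be assembled like this; the field names are illustrative, not the template's actual parser schema:

```javascript
// Build one audit-trail entry: critical cases take the critical track
// and trigger the oversight-team Slack alert, everything else is standard.
function auditEntry(parsed, receivedAt) {
  const critical = parsed.riskLevel === "critical";
  return {
    caseId: parsed.caseId,
    riskLevel: parsed.riskLevel,
    track: critical ? "critical" : "standard",
    escalate: critical,
    summary: parsed.summary,
    receivedAt,
  };
}
```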
by Connor Provines
Analyze email performance and optimize campaigns with AI using SendGrid and Airtable

This n8n template creates an automated feedback loop that pulls email metrics from SendGrid weekly, tracks performance in Airtable, analyzes trends across the last 4 weeks, and generates specific recommendations for your next campaign. The system learns what works and provides data-driven insights directly to your email creation process.

**Who's it for**

Email marketers and growth teams who want to continuously improve campaign performance without manual analysis. Perfect for businesses running regular email campaigns who need actionable insights based on real data rather than guesswork.

**Good to know**

- After 4-6 weeks, expect a 15-30% improvement in primary metrics.
- Requires at least 2 weeks of historical data to generate meaningful analysis.
- The system improves over time as it learns from your audience.
- Implementation time: ~1 hour total.

**How it works**

1. A schedule trigger runs weekly (typically Monday mornings).
2. Pulls the previous week's email statistics from SendGrid (delivered, opens, clicks, rates).
3. Updates the previous week's record in Airtable with actual performance data.
4. GPT-4 analyzes trends across the last 4 weeks, identifying patterns and opportunities.
5. Creates a new Airtable record for the upcoming week with specific recommendations: what to test, how to change it, the expected outcome, and a confidence level.
6. Your email creation workflow pulls these recommendations when generating new campaigns.
7. After sending, the actual email content is saved back to Airtable to close the loop.

**How to set up**

1. Create an Airtable base: Make a table called "Email Campaign Performance" with fields for week_ending, delivered, unique_opens, unique_clicks, open_rate, ctr, decision, test_variable, test_hypothesis, confidence_level, test_directive, implementation_instruction, subject_line_used, email_body, icp, use_case, baseline_performance, success_metric, target_improvement.
2. Configure SendGrid: Add your API key to the "SendGrid Data Pull" node and test the connection.
3. Set up Airtable credentials: Add a Personal Access Token and select your base/table in all Airtable nodes.
4. Add OpenAI credentials: Configure a GPT-4 API key in the "Previous Week Analysis" node.
5. Test with sample data: Manually add 2-3 weeks of data to Airtable, or run the workflow if you have historical data.
6. Schedule weekly runs: Set the workflow to trigger every Monday at 9 AM (or after your weekly campaign sends).
7. Integrate with email creation: Add an Airtable search node to your email workflow to retrieve current recommendations, and an update node to save what was sent.

**Requirements**

- SendGrid account with API access (or a similar ESP with a statistics API)
- Airtable account with a Personal Access Token
- OpenAI API access (GPT-4)

**Customizing this workflow**

- **Use a different email platform**: Replace the SendGrid node with Mailchimp, Brevo, or any ESP that provides a statistics API, adjusting field mappings accordingly.
- **Add more metrics**: Extend the Airtable fields to track bounce rate, unsubscribe rate, spam complaints, or revenue attribution.
- **Change analysis frequency**: Adjust the schedule trigger for bi-weekly or monthly analysis instead of weekly.
- **Swap AI models**: Replace GPT-4 with Claude or Gemini in the analysis node.
- **Multi-campaign tracking**: Duplicate the workflow for different campaign types (newsletters, promotions, onboarding) with separate Airtable tables.
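The open_rate and ctr fields written back to Airtable can be derived from the raw SendGrid counts; rounding to four decimal places is an arbitrary choice for tidy records:

```javascript
// Derive weekly rate metrics from raw delivery counts.
// Metric names mirror the Airtable fields listed in the setup steps.
function weeklyRates({ delivered, unique_opens, unique_clicks }) {
  const pct = (n, d) => (d ? Math.round((n / d) * 10000) / 10000 : 0);
  return {
    open_rate: pct(unique_opens, delivered),
    ctr: pct(unique_clicks, delivered),
  };
}
```

Guarding against `delivered` being zero keeps a week with no sends from producing NaN values in the trend analysis.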