by Felix Kemeth
## Overview

Staying up to date with fast-moving topics like AI, machine learning, research, or your specific industry can be tricky. To solve this for myself (for me, it is mostly AI and automation topics), I built and use this n8n workflow: it pulls fresh articles from NewsAPI based on my topics of interest, lets an AI agent pick the 5 most relevant ones, enriches them with the Tavily search engine, and sends a clean, readable newsletter straight to Telegram - in the language you specify.

In this post, I'll:

- Explain what the workflow does and why it's useful
- Show you how to import and configure it step by step
- Highlight the main advantages and common customisations
- Outline concrete next steps and improvements

After following this guide, you'll end up with a fully automated weekly newsletter that delivers relevant news on the topics you care about - without any manual work. This is ideal if you already run n8n and want a mostly no-code way to get a curated weekly digest in Telegram.

## What this workflow does

At a high level, this workflow:

- Runs on a schedule (weekly at 9:00 on Sundays by default)
- Automatically finds recent, relevant news via NewsAPI for your topics of interest
- Lets AI select the 5 most relevant articles
- Uses a Tavily-powered AI agent to fact-check and enrich each article
- Aggregates the final results into a compact newsletter in the language you specify
- Sends them as a Markdown-formatted Telegram message

The result: every week you get an AI-picked, enriched mini-newsletter with the latest news based on your own interests - delivered in Telegram.

## Requirements

To run this workflow, you need:

- **NewsAPI key** - Create an account here and generate an API key; it is free.
- **Tavily API key** - Sign up here and create an API key. They also have a generous free tier.
- **OpenAI API key** - Get one from OpenAI; we need this for the LLM agent calls.
- **Telegram bot + chat ID** - A Telegram bot (via BotFather) and the chat/channel ID where you want the newsletter.
It is also free. See for example here how to set that up.

## How it works

The exact logic of the workflow is as follows:

**Schedule Trigger** - Runs the workflow on a fixed interval (in this version: weekly, at 9:00 on Sundays).

**Set topics and language** - A Set node that defines topics (my default is AI,n8n - use a comma-separated list) and language (here I have English, but choose what you prefer). Change these to match your interests (e.g. health,fitness, macroeconomics,markets, climate,policy, or anything you care about).

**Call NewsAPI** - An HTTP Request node calling the NewsAPI API. It uses the following arguments:

- from: the last 7 days¹
- q: the query, built from your topics (topics like AI,n8n become AI OR n8n, as expected by the API)²
- sortBy: relevancy - the most relevant articles appear at the top of the returned results

Auth is handled via an httpQueryAuth credential, where your NewsAPI key is passed as a query parameter.

**AI Topic Selector** - An OpenAI - Message a model node using gpt-5.1 via your OpenAI API key with the following prompt:

You are an assistant that selects the most relevant news articles for a user. Instructions: Choose the 5 most relevant non-overlapping articles based on the user topics. For each article, provide: title, short summary (1–2 sentences), source name, url. Output the results in the language specified by the user. Output as a "articles" JSON array of objects, each with "title", "summary", "source" and "url".
User topics of interest: {{ $('Set topics and language').item.json.topics }} Output language: {{ $('Set topics and language').item.json.language }} NewsAPI articles: {{ $json.articles.map( article => `Title: ${article.title} Description: ${article.description} Content: ${article.content} Source: ${article.source.name} URL: ${article.url}` ).join('\n---\n') }}

The prompt instructs the model to read your topics and language, look at all articles from the NewsAPI call (it returns a maximum of 100), select the 5 most relevant, non-overlapping articles, and output a JSON array with title, summary, source and url.

**Split Out** - Parses the JSON array returned by the AI so that each article becomes its own item in n8n. This lets the downstream AI Agent node enrich each article individually.

**Newsletter AI Agent** - An AI Agent node with gpt-5.1 as the model, again accessed via your OpenAI API key. The agent takes the initial title, summary, source and url, uses the Tavily search tool to find 2–3 reliable, recent sources, and writes a concise 1–3 sentence article in the language you specified. The prompt for the model is shown below.

You are a research writer that updates short news summaries into concise, factual articles. Input: Title: {{ $json["title"] }} Summary: {{ $json["summary"] }} Source: {{ $json["source"] }} Original URL: {{ $json["url"] }} Language: {{ $('Set topics and language').item.json.language }} Instructions: Use Tavily Search to gather 2–3 reliable, recent, and relevant sources on this topic. Update the title if a more accurate or engaging one exists. Write 1–3 sentences summarizing the topic, combining the original summary and information from the new sources. Return the original source name and url as well.
Output (JSON): { "title": "final article title", "content": "concise 1–3 sentence article content", "source": "the name of the original source", "url": "the url of the original source" } Rules: Ensure the topic is relevant, informative, and timely. Translate the article if necessary to comply with the desired language {{ $('Set topics and language').item.json.language }}.

In particular, the prompt instructs the model to:

- Use Tavily Search to gather 2–3 reliable, recent, and relevant sources on the topic
- Update the title if a more accurate or engaging one exists
- Write 1–3 sentences summarizing the topic, combining the original summary and information from the new sources
- Reply in a pre-defined JSON format including the original source name and url

The Output Parser enforces a structured JSON output with title, content, source and url as fields. Because the model is allowed to adjust titles, you may occasionally see slightly different titles than in the original feed; if you prefer minimal changes, you can tighten the prompt to only allow small tweaks.

**Aggregate** - An Aggregate node collecting the output field from the agent. It combines the individual article objects back into one array to be used for messaging.

**Send a text message** - A Telegram - Send a text message node that uses your Telegram bot credentials and chatId. It renders each article as title and content, plus Source: source.

> To adjust this workflow for your needs, open the Set topics and language node to tweak topics (comma-separated, like AI,startups,LLMs or web dev,TypeScript,n8n) and switch the language to any target language, then inspect the Schedule Trigger to adjust the interval and time, e.g. weekly at 07:30. These two tweaks control the content topics of your newsletter and when you will receive it.

## Why this workflow is powerful

**End-to-end automation** - From news discovery to curated delivery, everything is automated.
**AI-driven topic relevance** - Instead of naïvely listing every headline, the AI filters for relevance to your topics and avoids overlapping or duplicate stories.

**Grounded in facts** - By using NewsAPI and Tavily, the newsletter stays fact-based: you get short, factual summaries grounded in multiple sources.

**Flexibility** - A single parameter (language) lets you specify the output language, while the Schedule Trigger lets you set the frequency.

**Low friction and mobile-first** - Using Telegram as a consumption surface provides quick, low-friction reading, with push notifications included.

## Next steps

Here are concrete directions to take this workflow further:

- **RAG workflow for better topic selection** - Use a Retrieval-Augmented Generation pattern to let the model better choose topics that align with your evolving preferences. Right now, all news articles go into the prompt, which may bias the model to pick articles that appear first.
- **Prompt iteration and evaluation framework** - Systematically experiment with different selection criteria (e.g. "more technical", "more beginner-friendly"), tone and length of the newsletter.
- **Logging using n8n data tables** - Persist previous newsletters to avoid repetition and for better debugging. Using the source links provided in the newsletter, track which articles were clicked to enable 1:1 personalization.
- **Email with HTML template** - For more flexibility, send the newsletter via email.
- **Trigger based on news relevance** - Instead of (or in addition to) a fixed schedule, compute a "relevance score" or "novelty score" across articles and trigger only when the score crosses a threshold.
- **Incorporating other news APIs or RSS feeds** - Add more sources such as other news APIs and RSS feeds from blogs, newsletters, or communities.
- **Adjust for arXiv paper search and research news** - Swap NewsAPI for arXiv search or other academic sources to obtain a personal research digest newsletter.
- **Add 1:1 personalization by tracking URL clicks** - Use n8n data tables to track which URLs have been clicked, and use this information as input to future AI runs to refine the news suggestions.
- **Audio and video news** - Use audio or video models for richer news delivery.

## Wrap-up

This workflow shows how I use n8n, NewsAPI, Tavily, OpenAI, and Telegram to create a personal weekly newsletter. It’s mostly no-code, easy to customize, and something I rely on myself to stay informed without spending time browsing news manually. Contact me here, visit my website, or connect with me on LinkedIn.

### Footnotes

¹ We do that here with the JS expression ={{ DateTime.fromISO($json.timestamp).minus({ days: 7 }) }}

² We do that here with the JS expression {{ $json.topics.replaceAll("," , " OR ") }}
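The two footnote expressions translate directly into plain JavaScript. As a sketch (outside n8n, using the built-in Date instead of Luxon's DateTime, which n8n provides):

```javascript
// Build the NewsAPI `q` parameter from the comma-separated topics string:
// "AI,n8n" -> "AI OR n8n"
function buildQuery(topics) {
  return topics.replaceAll(",", " OR ");
}

// Compute the `from` parameter: 7 days before a given ISO timestamp,
// returned as a YYYY-MM-DD date string.
function fromDate(isoTimestamp) {
  const d = new Date(isoTimestamp);
  d.setUTCDate(d.getUTCDate() - 7);
  return d.toISOString().slice(0, 10);
}

console.log(buildQuery("AI,n8n"));             // "AI OR n8n"
console.log(fromDate("2024-06-08T09:00:00Z")); // "2024-06-01"
```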
by Lucas Peyrin
## How it works

This workflow creates a sophisticated, self-improving customer support system that automatically handles incoming emails. It's designed to answer common questions using an AI-powered knowledge base and, crucially, to learn from human experts when new or complex questions arise, continuously expanding its capabilities. Think of it like having an AI assistant with a smart memory and a human mentor.

Here's the step-by-step process:

1. **New Email Received**: The workflow is triggered whenever a new email arrives in your designated support inbox (via Gmail).
2. **Classify Request**: An AI model (Google Gemini 2.5 Flash Lite) first classifies the incoming email to ensure it's a genuine support request, filtering out irrelevant messages.
3. **Retrieve Knowledge Base**: The workflow fetches all existing Question and Answer pairs from your dedicated Google Sheet knowledge base.
4. **AI Answer Attempt**: A powerful AI model (Google Gemini 2.5 Pro) analyzes the customer's email against the entire knowledge base. It attempts to find a highly relevant answer and drafts a complete HTML email response if successful.
5. **Decision Point**: An IF node checks if the AI found a confident answer.
   - **If Answer Found**: The AI-generated HTML response is immediately sent back to the customer via Gmail.
   - **If No Answer Found (Human-in-the-Loop)**:
     - **Escalate to Human**: The customer's summarized question and original email are forwarded to a human expert (you or your team) via Gmail, requesting their assistance.
     - **Human Reply & AI Learning**: The workflow waits for the human expert's reply. Once received, another AI model (Google Gemini 2.5 Flash) processes both the original customer question and the expert's reply to distill them into a new, generic, and reusable Question/Answer pair.
     - **Update Knowledge Base**: This newly created Q&A pair is automatically added as a new row to your Google Sheet knowledge base, ensuring the system can answer similar questions automatically in the future.
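The Decision Point branching can be sketched in plain JavaScript. This is illustrative only: the field names of the AI's output (`answerFound`, `html`, `summary`) are assumptions for the sketch, not the template's actual schema.

```javascript
// Hypothetical shape of the answering model's output -- field names are
// assumptions for illustration, not taken from the template itself.
function routeSupportRequest(aiOutput) {
  // The IF node branches on whether the model produced a confident answer.
  if (aiOutput.answerFound && aiOutput.html) {
    // Answer found: send the drafted HTML reply straight to the customer.
    return { action: "sendToCustomer", body: aiOutput.html };
  }
  // No confident answer: escalate the summarized question to a human expert.
  return { action: "escalateToHuman", question: aiOutput.summary };
}

console.log(routeSupportRequest({ answerFound: true, html: "<p>Hi!</p>" }).action);
// "sendToCustomer"
console.log(routeSupportRequest({ answerFound: false, summary: "Refund terms?" }).action);
// "escalateToHuman"
```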
## Set up steps

Setup time: ~10-15 minutes

This workflow requires connecting your Gmail and Google Sheets accounts, and obtaining a Google AI API key. Follow these steps carefully:

1. **Connect Your Gmail Account**: Select the On New Email Received node. Click the Credential dropdown and select + Create New Credential to connect your Gmail account. Grant the necessary permissions. Repeat this for the Send AI Answer and Ask Human for Help nodes, selecting the credential you just created.
2. **Connect Your Google Sheets Account**: Select the Get Knowledge Base node. Click the Credential dropdown and select + Create New Credential to connect your Google account. Grant the necessary permissions. Repeat this for the Add to Knowledge Base node, selecting the credential you just created.
3. **Set up Your Google Sheet Knowledge Base**: Create a new Google Sheet in your Google Drive. Rename the first sheet (tab) to QA Database. In the first row of QA Database, add two column headers: Question (in cell A1) and Answer (in cell B1). Go back to the Get Knowledge Base node in n8n. In the Document ID field, select your newly created Google Sheet. Do the same for the Add to Knowledge Base node.
4. **Get Your Google AI API Key (for Gemini Models)**: Visit Google AI Studio at aistudio.google.com/app/apikey. Click "Create API key in new project" and copy the key. In the workflow, go to the Google Gemini 2.5 Pro node, click the Credential dropdown, and select + Create New Credential. Paste your key into the API Key field and Save. Repeat this for the Google Gemini 2.5 Flash Lite and Google Gemini 2.5 Flash nodes, selecting the credential you just created.
5. **Configure Human Expert Email**: Select the Ask Human for Help node. In the Send To field, replace the placeholder email address with the actual email address of your human expert (e.g., your own email or a team support email).
**Activate the Workflow**: Once all credentials and configurations are set, activate the workflow using the toggle switch at the top right of your n8n canvas.

**Start Learning!** Send a test email to the Gmail account connected to the On New Email Received node. Observe how the AI responds, or how it escalates to your expert email and then learns from the reply. Check your Google Sheet to see new Q&A pairs being added!
by Davide
This workflow automates the end-to-end analysis of WooCommerce product reviews, transforming raw customer feedback into actionable product and customer-care insights, and delivering them in a structured, visual, and shareable format.

It starts by retrieving reviews for a specified product via the WooCommerce API. Each review then undergoes sentiment analysis using LangChain's Sentiment Analysis node. The workflow aggregates the sentiment data, creates a pie chart visualization via QuickChart, and compiles a comprehensive report using an AI Agent. The report includes executive summaries, quantitative data, qualitative analysis, product diagnostics, and operational recommendations. Finally, the AI-generated report is converted to HTML and emailed to a designated recipient for review by customer and product teams.

## Key Advantages

1. ✅ **Full Automation of Review Analysis** - Eliminates manual work by automating data collection, sentiment analysis, reporting, visualization, and delivery in a single workflow.
2. ✅ **Scalable and Reliable** - Batch processing ensures the workflow can handle dozens or hundreds of reviews without performance issues.
3. ✅ **Action-Oriented Insights (Not Just Sentiment)** - Instead of stopping at sentiment scores, the workflow produces root-cause hypotheses, concrete improvement actions, prioritized recommendations (P0 / P1 / P2), and measurable KPIs.
4. ✅ **Combines Quantitative and Qualitative Analysis** - Merges hard metrics (averages, distributions, outliers) with qualitative insights (themes, risks, opportunities), giving a 360° view of customer feedback.
5. ✅ **Visual + Narrative Output** - Stakeholders receive both visual sentiment charts for quick understanding and structured written reports for strategic decision-making.
6. ✅ **Ready for Product & Customer Care Teams** - The output format is tailored for non-technical teams: clear language, masked personal data (GDPR-friendly), and immediate usability in meetings, emails, or documentation.
7. ✅ **Easily Extensible** - The workflow can be extended to run on a schedule, analyze multiple products, store results in a database or CRM, or trigger alerts for negative sentiment spikes.

## Ideal Use Cases

- Continuous monitoring of product sentiment
- Supporting product roadmap decisions
- Identifying customer pain points early
- Improving customer support response strategies
- Reporting customer voice to stakeholders automatically

## How it works

**Manual Trigger & Configuration** - The workflow starts manually and sets the target WooCommerce product ID and store URL.

**Data Retrieval from WooCommerce** - Fetches all reviews for the selected product via the WooCommerce REST API, and retrieves product details (name, description, categories) to enrich the analysis context.

**Batch Processing of Reviews** - Reviews are processed in batches to ensure scalability and reliability, even with a large number of reviews.

**AI-Powered Sentiment Analysis** - Each review is analyzed using an OpenAI-based sentiment analysis model. For every review, the workflow extracts the sentiment category (Positive / Negative / Neutral), strength (intensity), and confidence (reliability of the classification).

**Data Normalization & Aggregation** - Review text is cleaned and structured, and the sentiment data is aggregated to compute overall distributions and metrics.

**Visual Sentiment Distribution** - A pie chart is dynamically generated via QuickChart to visually represent the sentiment distribution.
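The aggregation and chart steps can be sketched as follows. The review item shape here is an assumption; QuickChart renders a chart from a Chart.js configuration serialized into the URL.

```javascript
// Placeholder review items -- in the workflow these come from the
// sentiment-analysis step, with more fields than shown here.
const reviews = [
  { sentiment: "Positive" },
  { sentiment: "Positive" },
  { sentiment: "Negative" },
  { sentiment: "Neutral" },
];

// Count reviews per sentiment category.
const counts = reviews.reduce((acc, r) => {
  acc[r.sentiment] = (acc[r.sentiment] || 0) + 1;
  return acc;
}, {});

// Build a QuickChart pie-chart URL from the aggregated counts.
const config = {
  type: "pie",
  data: { labels: Object.keys(counts), datasets: [{ data: Object.values(counts) }] },
};
const chartUrl =
  "https://quickchart.io/chart?c=" + encodeURIComponent(JSON.stringify(config));

console.log(counts); // { Positive: 2, Negative: 1, Neutral: 1 }
```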
**Advanced AI Insight Generation** - A specialized AI agent ("Product Insights Analyst") transforms the raw and aggregated data into a professional, structured report, including an executive summary, quantitative statistics, qualitative themes, a product diagnosis, operational recommendations, product backlog ideas, and next steps.

**HTML Conversion & Delivery** - The report is converted into clean HTML and automatically sent via email to stakeholders (e.g. product or customer care teams).

## Set up steps

1. **Configure credentials**: Set up WooCommerce API credentials in the HTTP Request node. Add OpenAI API credentials for both sentiment analysis and reporting. Configure Gmail OAuth2 credentials for sending the final email report.
2. **Set parameters**: In the "Product ID" node, replace PRODUCT_ID and YOUR_WEBSITE with the actual product ID and WooCommerce site URL. Update the recipient email address in the "Send a message" node.
3. **Optional adjustments**: Modify the pie chart design in the "QuickChart" node if needed. Adjust the report structure or language in the "Product Insights Analyst" system prompt.
4. **Run the workflow**: Click "Execute workflow" on the manual trigger to start the process. Monitor execution in n8n to ensure all nodes process correctly.

Once configured, the workflow will automatically analyze product reviews, generate insights, and deliver a formatted report via email.

👉 Subscribe to my new YouTube channel. Here I’ll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on Linkedin.
by Anshul Chauhan
# Automate Your Life: The Ultimate AI Assistant in Telegram (Powered by Google Gemini)

Transform your Telegram messenger into a powerful, multi-modal personal or team assistant. This n8n workflow creates an intelligent agent that can understand text, voice, images, and documents, and take action by connecting to your favorite tools like Google Calendar, Gmail, Todoist, and more. At its core, a powerful Manager Agent, driven by Google Gemini, interprets your requests, orchestrates a team of specialized sub-agents, and delivers a coherent, final response, all while maintaining a persistent memory of your conversations.

## Key Features

- 🧠 **Intelligent Automation**: Uses Google Gemini as a central "Manager Agent" to understand complex requests and delegate tasks to the appropriate tool.
- 🗣️ **Multi-Modal Input**: Interact naturally by sending text, voice notes, photos, or documents directly into your Telegram chat.
- 🔌 **Integrated Toolset**: Comes pre-configured with agents to manage your memory, tasks, emails, calendar, research, and project sheets.
- 🗂️ **Persistent Memory**: Leverages Airtable as a knowledge base, allowing the assistant to save and recall personal details, company information, or past conversations for context-rich interactions.
- ⚙️ **Smart Routing**: Automatically detects the type of message you send and routes it through the correct processing pipeline (e.g., voice is transcribed, images are analyzed).
- 🔄 **Conversational Context**: Utilizes a window buffer to maintain short-term memory, ensuring follow-up questions and commands are understood within the current conversation.

## How It Works

The Telegram Trigger node acts as the entry point, receiving all incoming messages (text, voice, photo, document). A Switch node intelligently routes the message based on its type:

- **Voice**: The audio file is downloaded and transcribed into text using a voice-to-text service.
- **Photo**: The image is downloaded, converted to a base64 string, and prepared for visual analysis.
- **Document**: The file is routed to a document handler that extracts its text content for processing.
- **Text**: The message is used as-is.

A Merge node gathers the processed input into a unified prompt. The Manager Agent receives this prompt, analyzes the user's intent, and orchestrates one or more specialized agents/tools:

- **memory_base** (Airtable): For saving and retrieving information from your long-term knowledge base.
- **todo_and_task_manager** (Todoist): To create, assign, or check tasks.
- **email_agent** (Gmail): To compose, search, or send emails.
- **calendar_agent** (Google Calendar): To schedule events or check your agenda.
- **research_agent** (Wikipedia/Web Search): To look up information.
- **project_management** (Google Sheets): To provide updates on project trackers.

After executing the required tasks, the Manager Agent formulates a final response and sends it back to you via the Telegram node.

## Setup Instructions

Follow these steps to get your AI assistant up and running.

- **Telegram Bot**: Create a new bot using the BotFather in Telegram to get your Bot Token. In the n8n workflow, configure the Telegram Trigger node's webhook. Add your Bot Token to the credentials in all Telegram nodes. For proactive messages, replace the chatId placeholders with your personal Telegram Chat ID.
- **Google Gemini AI**: In the Google Gemini nodes, add your credentials by providing your Google Gemini API key.
- **Airtable Knowledge Base**: Set up an Airtable base to act as your assistant's long-term memory. In the memory_base nodes (Airtable nodes), configure the credentials and provide the Base ID and Table ID.
- **Google Workspace APIs**: Connect your Google account credentials for Gmail, Google Calendar, and Google Sheets. In the relevant nodes, specify the Document/Sheet IDs you want the assistant to manage.
- **Connect Other Tools**: Add your credentials for Todoist and any other integrated tool APIs.
- **Configure Conversational Memory**: This workflow is designed for multi-user support.
  Verify that the Session Key in the "Window Buffer Memory" nodes is correctly set to a unique user identifier from Telegram (e.g., {{ $json.chat.id }}). This ensures conversations from different users are kept separate.
- **Review Schedule Triggers**: Check any nodes designed to run on a schedule (e.g., "At a regular time"). Adjust their cron expressions, times, and timezone to fit your needs (e.g., for daily summaries).
- **Test the Workflow**: Activate the workflow and send a text message to your bot (e.g., "Hello!").

## Estimated Setup Time

- **30–60 minutes**: If you already have your API keys, account credentials, and service IDs (like Sheet IDs) ready.
- **2–3 hours**: For a complete, first-time setup, which includes creating API keys, setting up new spreadsheets or Airtable bases, and configuring detailed permissions.
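The Switch node's routing described above amounts to checking which field is present on the incoming message. A minimal sketch, with field names following the Telegram Bot API message object (voice, photo, document, text):

```javascript
// Route an incoming Telegram message to the matching processing branch.
function routeMessage(message) {
  if (message.voice) return "voice";       // download + transcribe to text
  if (message.photo) return "photo";       // download + base64 for vision
  if (message.document) return "document"; // extract text content
  return "text";                           // use the message text as-is
}

console.log(routeMessage({ text: "Hello!" }));            // "text"
console.log(routeMessage({ voice: { file_id: "abc" } })); // "voice"
```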
by Automate With Marc
# Social Media Post & Caption Generator (Google Drive → AI Caption → Approval → Auto-Post)

Automatically turn your existing content library into approved, AI-written social media posts. This workflow selects a random file from Google Drive, generates an Instagram caption using AI, sends it to you for approval, and, once approved, uploads and publishes the post via Blotato.

🎥 Watch the step-by-step guide: https://youtu.be/9XU9ECcj9dg

## What this template does

On a scheduled basis (default: 10:00 AM), this workflow:

1. Searches a specified Google Drive folder for content files
2. Randomly selects one file to avoid repetitive posting
3. Uses AI to generate an Instagram-ready caption based on the file name
4. Sends the caption + file link to you via email for approval
5. If approved: downloads the file from Drive, uploads the media to Blotato, and creates and publishes the social media post
6. If rejected: automatically loops back and selects a different random file

## Why it’s useful

- Keeps your social media consistent with minimal manual effort
- Adds a human-in-the-loop approval step for quality control
- Eliminates the need to manually write captions or pick content
- Ideal for creators, solo marketers, and small teams managing content at scale

## Requirements

Before using this template, connect the following credentials in n8n:

- Google Drive OAuth (searching & downloading files)
- OpenAI API (caption generation)
- Gmail OAuth (approval email workflow)
- Blotato API (media upload & social posting)

All credentials must be added manually after importing the template. No sensitive data is included in the template.

## How it works (node overview)

- **Schedule Trigger** - Runs the workflow at a fixed time each day.
- **Google Drive – Search Files and Folders** - Fetches all files from a specified Drive folder.
- **Randomizer (Code Node)** - Selects a random file from the available list to ensure content variety.
- **Caption Generator AI** - Uses an AI model to generate a descriptive Instagram caption based on the file name.
- **Gmail – Send for Approval and Wait** - Emails the generated caption and file link to you and pauses execution until approval or rejection.
- **IF (Approved)** - Yes: proceeds to download, upload, and publish. No: loops back to select another random file.
- **Google Drive – Download File** - Downloads the approved content file.
- **Blotato – Upload Media & Create Post** - Uploads the media and publishes the post to the connected social account.

## Setup instructions

1. Import the template into your n8n workspace
2. Open the Google Drive nodes and connect your Drive OAuth credential
3. Replace the Folder ID with your own content folder
4. Connect your OpenAI credential in the Caption Generator node
5. Connect Gmail OAuth and set your approval email address
6. Connect your Blotato account and select the target social profile
7. Run the workflow once to test the approval loop
8. Activate the workflow to start automated posting

## Customization ideas

- Adjust the AI system prompt to change the tone (funny, educational, sales-focused)
- Add hashtag rules (e.g. max 5 hashtags, niche-specific only)
- Replace random selection with "least recently posted" logic using a Data Table
- Duplicate the Blotato node to post to multiple platforms
- Add a fallback step to auto-edit captions that exceed character limits

## Troubleshooting

- **No files found**: confirm the Google Drive folder ID and permissions
- **Approval email not received**: check Gmail OAuth scopes and spam folder
- **Caption quality not ideal**: refine the AI system prompt
- **Upload fails**: confirm Blotato account permissions and social account connection
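The Randomizer Code node described in the node overview boils down to a one-line random pick. A minimal sketch (the file objects here are placeholders):

```javascript
// Pick one file at random from the list returned by the Drive search,
// so repeated runs vary the content that gets posted.
function pickRandomFile(files) {
  return files[Math.floor(Math.random() * files.length)];
}

// Placeholder file list for illustration.
const files = [{ name: "a.jpg" }, { name: "b.jpg" }, { name: "c.jpg" }];
console.log(pickRandomFile(files).name); // one of "a.jpg", "b.jpg", "c.jpg"
```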
by Mariela Slavenova
This template enriches a lead list by analyzing each contact’s LinkedIn activity and auto-generating a single personalized opening line for cold outreach. Drop a spreadsheet into a Google Drive folder → the workflow parses the rows, fetches LinkedIn content (recent post or profile), uses an LLM to craft a one-liner, writes the result back to Google Sheets, and sends a Telegram summary.

## Good to know

- Works with two paths:
  - Recent post found → personalize from the latest LinkedIn post.
  - No recent post → personalize from profile fields (headline, about, current role).
- Requires valid Apify credentials for the LinkedIn scrapers and LLM keys (Anthropic and/or OpenAI).
- Costs depend on the LLM(s) you choose and scraping usage.
- Replace all placeholders like [put your token here] and [put your Telegram Bot Chat ID here] before running.
- Respect the target platform’s terms of service when scraping LinkedIn data.

## What this workflow does

1. **Trigger (Google Drive)** – Watches a specific folder for newly uploaded lead spreadsheets.
2. **Download & Parse** – Downloads the file and converts it to structured items (first name, last name, company, LinkedIn URL, email, website).
3. **Batch Loop** – Processes each row individually.
4. **Fetch Activity** – Calls Apify LinkedIn Profile Posts (latest post) and records the current date for recency checks.
5. **Recency Check (LLM)** – An OpenAI node returns true/false for "post is from the current year."
6. **Branching** – If TRUE, an AI Agent (Anthropic) crafts a single, natural reference line based on the recent post. If FALSE, Apify LinkedIn Profile is called and an AI Agent (Anthropic) crafts a one-liner from profile data (headline/about/current role).
7. **Write Back (Google Sheets)** – Updates the original sheet by matching on email and writing the personalization field.
8. **Notify (Telegram)** – Sends a brief completion summary with the sheet name and link.
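The Recency Check is handled by an LLM in this template, but the same rule can be expressed deterministically if your scraper returns a post timestamp (the timestamp field is an assumption):

```javascript
// Is the latest post from the current year? Mirrors the LLM's true/false
// recency check; `now` is injectable so the rule is testable.
function isFromCurrentYear(postedAt, now = new Date()) {
  return new Date(postedAt).getUTCFullYear() === now.getUTCFullYear();
}

console.log(isFromCurrentYear("2024-03-01T00:00:00Z", new Date("2024-11-30T00:00:00Z"))); // true
console.log(isFromCurrentYear("2023-12-31T00:00:00Z", new Date("2024-01-01T00:00:00Z"))); // false
```

Swapping this in for the LLM call would save tokens; keep the LLM version if you want fuzzier rules like "recent and substantive".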
## Requirements

- Google Drive & Google Sheets connections
- Apify account + token for the LinkedIn scrapers
- LLM keys: Anthropic (Claude) and/or OpenAI (you can use one or both)
- Telegram bot for notifications (bot token + chat ID)

## Setup

1. **Connect credentials** - Google Drive/Sheets, Apify, OpenAI and/or Anthropic, Telegram.
2. **Configure the Drive trigger** - Select the folder where you’ll upload your lead sheets.
3. **Map columns** - Ensure your sheet has: First Name, Last Name, Company Name for Emails, Person Linkedin Url, Email, Website.
4. **Replace placeholders** - In the HTTP nodes: Bearer [put your token here]. In the Telegram node: [put your Telegram Bot Chat ID here].
5. **(Optional) Adjust the recency rule** - The current logic checks for current-year posts; change the prompt if you prefer 30-day windows.

## How to use

1. Upload a test spreadsheet to the watched Drive folder.
2. Execute the workflow once to validate.
3. Open your Google Sheet to see the new personalization column populated.
4. Check Telegram for the completion summary.

## Customizing this template

- **Data sources**: Add company news, website content, or X/Twitter as fallback signals.
- **LLM choices**: Use only Anthropic or only OpenAI; tweak the temperature for tone.
- **Destinations**: Write to a CRM (HubSpot/Salesforce/Airtable) instead of Sheets.
- **Notifications**: Swap Telegram for Slack/Email/Discord.
## Who it’s for

- Sales & SDR teams needing authentic, scalable personalization for cold outreach.
- Lead gen agencies enriching spreadsheets with ready-to-use openers.
- Marketing & growth teams improving reply rates by referencing real prospect activity.

## Limitations & compliance

- LinkedIn scraping may be rate-limited or blocked; follow the platform ToS and local laws.
- Costs vary with scraping volume and LLM usage.

Need help customizing? Contact me for consulting and support: LinkedIn
by Stefan Joulien
## How it works

1. A prospect submits a form with their email and website URL
2. The workflow fetches and cleans the website HTML, extracting key business signals
3. An Analyst Agent reads the content and produces a structured JSON diagnostic (business type, offers, pain points, funnels, copy kit)
4. A Writer Agent converts the diagnostic into a personalised email with 10 actionable improvements, written automatically in the lead's language
5. A branded HTML email is assembled and sent via Gmail with numbered improvement cards, a booking CTA and a professional footer

## Set up steps

1. Connect your OpenAI API credentials to both AI model nodes (~2 min)
2. Connect your Gmail account to the Send Email node (~1 min)
3. Open the Build Email HTML node and fill in the 6 constants at the top: your name, email, cal.com booking link, Instagram URL, LinkedIn URL and logo image URL (~3 min)
4. Activate the workflow and share the form URL with your leads
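The 6 constants in the Build Email HTML node look roughly like this. The names and values below are placeholders for illustration, not the node's exact identifiers:

```javascript
// Placeholder values -- replace with your own details, as the setup steps describe.
const SENDER_NAME = "Jane Doe";
const SENDER_EMAIL = "jane@example.com";
const BOOKING_LINK = "https://cal.com/jane-doe/intro";   // cal.com booking CTA
const INSTAGRAM_URL = "https://instagram.com/janedoe";
const LINKEDIN_URL = "https://linkedin.com/in/janedoe";
const LOGO_URL = "https://example.com/logo.png";

// The node interpolates these constants into the branded HTML footer.
const footer = `<footer><img src="${LOGO_URL}" alt="logo">` +
  `<p>${SENDER_NAME} · <a href="mailto:${SENDER_EMAIL}">${SENDER_EMAIL}</a></p>` +
  `<p><a href="${BOOKING_LINK}">Book a call</a> · ` +
  `<a href="${INSTAGRAM_URL}">Instagram</a> · ` +
  `<a href="${LINKEDIN_URL}">LinkedIn</a></p></footer>`;

console.log(footer.includes(BOOKING_LINK)); // true
```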
by vinci-king-01
Employee Directory Sync – Microsoft Teams & Coda

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow keeps your employee directory synchronized across your HRIS (or any REST-compatible HR database), Microsoft Teams, Coda docs, and Slack channels. It automatically polls the HR system on a schedule, detects additions or updates, and propagates those changes to downstream tools so everyone always has the latest employee information.

Pre-conditions/Requirements

Prerequisites

An active n8n instance (self-hosted or n8n cloud)
ScrapeGraphAI community node installed
A reachable HRIS API (BambooHR, Workday, Personio, or any custom REST endpoint)
An existing Microsoft Teams workspace and a team/channel for announcements
A Coda account with an employee directory table
A Slack workspace and channel where directory updates will be posted

Required Credentials

**Microsoft Teams OAuth2** – To post adaptive cards or messages
**Coda API Token** – To insert/update rows in your Coda doc
**Slack OAuth2** – To push notifications into a Slack channel
**HTTP Basic / Bearer Token** – For your HRIS REST endpoint
**ScrapeGraphAI API Key** – Only required if you scrape public profile data

HRIS Field Mapping

| HRIS Field | Coda Column | Teams/Slack Field |
|------------|-------------|-------------------|
| firstName  | First Name  | First Name        |
| lastName   | Last Name   | Last Name         |
| email      | Email       | Email             |
| title      | Job Title   | Job Title         |
| department | Department  | Department        |

(Adjust the mapping in the Set and Code nodes as needed.)

How it works
It automatically polls the HR system on a schedule, detects additions or updates, and propagates those changes to downstream tools so everyone always has the latest employee information.

Key Steps:

**Schedule Trigger**: Fires daily (or at your chosen interval) to start the sync routine.
**HTTP Request**: Fetches the full list of employees from your HRIS API.
**Code (Delta Detector)**: Compares fetched data with a cached snapshot to identify new hires, departures, or updates.
**IF Node**: Branches based on whether changes were detected.
**Split In Batches**: Processes employees in manageable sets to respect API rate limits.
**Set Node**: Maps HRIS fields to Coda columns and Teams/Slack message fields.
**Coda Node**: Upserts rows in the employee directory table.
**Microsoft Teams Node**: Posts an adaptive card summarizing changes to a selected channel.
**Slack Node**: Sends a formatted message with the same update.
**Sticky Note**: Provides inline documentation within the workflow for maintainers.

Set up steps

Setup Time: 10-15 minutes

Import the workflow into your n8n instance.
Open the Credentials tab and create: a Microsoft Teams OAuth2 credential, a Coda API credential, a Slack OAuth2 credential, and an HRIS HTTP credential (Basic or Bearer).
Configure the HRIS HTTP Request node: replace the placeholder URL with your HRIS endpoint (e.g., https://api.yourhr.com/v1/employees) and add query parameters or headers as required by your HRIS.
Map Coda Doc & Table IDs in the Coda node.
Select Teams & Slack channels in their respective nodes.
Adjust the Schedule Trigger to your desired frequency.
Optional: Edit the Code node to tweak field mapping or add custom delta-comparison logic.
Execute the workflow manually once to verify proper end-to-end operation.
Activate the workflow.

Node Descriptions

Core Workflow Nodes:

**Schedule Trigger** – Initiates the sync routine at set intervals.
**HTTP Request (Get Employees)** – Pulls the latest employee list from the HRIS.
**Code (Delta Detector)** – Stores the previous run's data in workflow static data and identifies changes.
**IF (Has Changes?)** – Skips downstream steps when no changes were detected, saving resources.
**Split In Batches** – Iterates through employees in chunks (default 50) to avoid API throttling.
**Set (Field Mapper)** – Renames and restructures data for Coda, Teams, and Slack.
**Coda (Upsert Rows)** – Inserts new rows or updates existing ones based on email match.
**Microsoft Teams (Post Message)** – Sends a rich adaptive card with the update summary.
**Slack (Post Message)** – Delivers a concise change log to a Slack channel.
**Sticky Note** – Embedded documentation for quick reference.

Data Flow:

Schedule Trigger → HTTP Request → Code (Delta Detector)
Code → IF (Has Changes?)
If No → End
If Yes → Split In Batches → Set → Coda → Teams → Slack

Customization Examples

Change the sync frequency inside the Schedule Trigger:

```
{ "mode": "everyDay", "hour": 6, "minute": 0 }
```

Extend the field mapping inside the Code node (the Set node cannot run JavaScript):

```
for (const item of items) {
  item.json.phone = item.json.phoneNumber ?? '';
  item.json.location = item.json.officeLocation ?? '';
}
return items;
```

Data Output Format

The workflow outputs structured JSON data:

```
{
  "employee": {
    "id": "123",
    "firstName": "Jane",
    "lastName": "Doe",
    "email": "jane.doe@example.com",
    "title": "Senior Engineer",
    "department": "R&D",
    "status": "New Hire",
    "syncedAt": "2024-05-08T10:15:23.000Z"
  },
  "destination": {
    "codaRowId": "row_abc123",
    "teamsMessageId": "msg_987654",
    "slackTs": "1715158523.000200"
  }
}
```

Troubleshooting

Common Issues

HTTP 401 from HRIS API – Verify token validity and that the credential is attached to the HTTP Request node.
Coda duplicates rows – Ensure the key column in Coda is set to “Email” and the Upsert option is enabled.

Performance Tips

Cache HRIS responses in static data to minimize API calls.
Increase the Split In Batches size only if your API rate limits allow.

Pro Tips:

Use n8n’s built-in Version Control to track mapping changes over time.
Add a second IF node to differentiate between “new hires” and “updates” for tailored announcements.
Enable Slack’s “threaded replies” to keep your #hr-updates channel tidy.
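The delta-detection step described above can be sketched as follows. This is a hypothetical helper that diffs the current employee list against the previous snapshot keyed by email; the template's actual Code node (and its use of workflow static data) may differ, and the JSON.stringify comparison assumes stable key order:

```javascript
// Hypothetical delta detector: compares the freshly fetched employee
// list against the previous run's snapshot, keyed by email.
function detectDelta(current, previous) {
  const prevByEmail = new Map(previous.map(e => [e.email, e]));
  const currEmails = new Set(current.map(e => e.email));

  const newHires = current.filter(e => !prevByEmail.has(e.email));
  const updates = current.filter(e => {
    const prev = prevByEmail.get(e.email);
    // Naive deep-compare; assumes fields come back in a stable order.
    return prev && JSON.stringify(prev) !== JSON.stringify(e);
  });
  const departures = previous.filter(e => !currEmails.has(e.email));

  const hasChanges = newHires.length + updates.length + departures.length > 0;
  return { newHires, updates, departures, hasChanges };
}
```

The `hasChanges` flag is what the IF node would branch on, so the Coda, Teams, and Slack nodes only run when there is something to announce.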
by Shreya Bhingarkar
This n8n workflow automates your entire B2B outreach pipeline, from lead discovery to personalized cold email delivery. Submit a form, let Apollo find and enrich your leads, review AI-generated emails in your sheet, and send them all with one click.

How it works

**Form Trigger** accepts Job Title, Location and Number of Leads to kick off the workflow
**Apollo** searches for matching people and enriches each lead with email, phone, LinkedIn URL and company data
**Duplicate check** runs automatically to skip any leads already in your sheet
**Leads are saved** to a Google Sheet with outreach status set to Pending
**Manual Trigger** runs the email generation section, using a Groq LLM to write a personalized cold email per lead
**Generated emails** are saved to the sheet for review before sending
**Gmail** sends each email and updates the outreach status to Mail Sent

How to use

Run Trigger 1 — Form — to scrape and enrich leads from Apollo
Review leads in your Google Sheet
Run Trigger 2 — Manual — to generate and send cold emails
Update the AI Cold Email Writer node with your company details before running

Requirements

**Apollo** account with API key
**Google Sheets** account
**Groq** account with API key
**Gmail** account

Customising this workflow

Replace Groq with OpenAI or any other LLM for email generation
Extend with a follow-up sequence to re-engage leads who did not reply
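The duplicate check above can be sketched as a Code-node-style filter. The `Email` column name and lead field are assumptions based on a typical sheet layout; the template's actual node configuration may differ:

```javascript
// Hypothetical duplicate check: keep only Apollo leads whose email is
// not already present in the Google Sheet rows fetched earlier.
// Emails are normalized (trimmed, lowercased) before comparison.
function filterNewLeads(apolloLeads, sheetRows) {
  const known = new Set(
    sheetRows
      .map(r => (r.Email || '').trim().toLowerCase())
      .filter(Boolean)
  );
  return apolloLeads.filter(
    l => !known.has((l.email || '').trim().toLowerCase())
  );
}
```

Normalizing case and whitespace matters here: without it, `Jane@Acme.com` and `jane@acme.com` would be treated as two different leads and emailed twice.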
by John Alejandro SIlva
🤖💬 Smart Telegram AI Assistant with Memory Summarization & Dynamic Model Selection

> Optimize your AI workflows, cut costs, and get faster, more accurate answers.

📋 Description

Tired of expensive AI calls, slow responses, or bots that forget your context? This Telegram AI Assistant template is designed to optimize cost, speed, and precision in your AI-powered conversations.

By combining PostgreSQL chat memory, AI summarization, and dynamic model selection, this workflow ensures you only pay for what you really need. Simple queries get routed to lightweight models, while complex requests automatically trigger more advanced ones. The result? Smarter context, lower costs, and better answers.

This template is perfect for anyone who wants to:

⚡ Save money by using cheaper models for easy tasks.
🧠 Keep context relevant with AI-powered summarization.
⏱️ Respond faster thanks to optimized chat memory storage.
💬 Deliver better answers directly inside Telegram.

✨ Key Benefits

💸 Cost Optimization: Automatically routes simple requests to Gemini Flash Lite and reserves Gemini Pro for complex reasoning.
🧠 Smarter Context: Summarization ensures only the most relevant chat history is used.
⏱️ Faster Workflows: Storing user + agent messages in a single row halves DB queries and saves ~0.3s per response.
🎤 Voice Message Support: Convert Telegram voice notes to text and reply intelligently.
🛡️ Error-Proof Formatting: Safe MarkdownV2 ensures Telegram-ready answers.

💼 Use Case

This template is for anyone who needs an AI chatbot on Telegram that balances cost, performance, and intelligence. Customer support teams can reduce expenses by using lightweight models for FAQs. Freelancers and consultants can offer faster AI-powered chats without losing context. Power users can handle voice + text seamlessly while keeping conversations memory-aware. Whether you’re scaling a business or just want a smarter assistant, this workflow adapts to your needs and budget.
💬 Example Interactions

**Quick Q&A** → Routed to Gemini Flash Lite for fast, low-cost answers.
**Complex problem-solving** → Sent to Gemini Pro for in-depth reasoning.
**Voice messages** → Automatically transcribed, summarized, and answered.
**Long conversations** → Context is summarized, ensuring precise and efficient replies.

🔑 Required Credentials

**Telegram Bot API** (bot token)
**PostgreSQL** (database connection)
**Google Gemini API** (Flash Lite, Flash, Pro)

⚙️ Setup Instructions

🗄️ Create the PostgreSQL table (chat_memory) using the SQL from the Gray section.
🔌 Configure the Telegram Trigger with your bot token.
🤖 Connect your Gemini API credentials.
🗂️ Set up the PostgreSQL nodes with your DB details.
▶️ Activate the workflow and start chatting with your AI-powered Telegram bot.

🏷 Tags

telegram ai-assistant chatbot postgresql summarization memory gemini dynamic-routing workflow-optimization cost-saving voice-to-text

🙏 Acknowledgement

A special thank you to Davide for the inspiration behind this template. His work on the AI Orchestrator that dynamically selects models based on input type served as a foundational guide for this architecture.

💡 Need Assistance?

Want to customize this workflow for your business or project? Let’s connect:

📧 Email: johnsilva11031@gmail.com
🔗 LinkedIn: John Alejandro Silva Rodríguez
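The dynamic model selection described above can be sketched as a simple router. This is a hypothetical heuristic (message length plus reasoning keywords) with illustrative model identifiers; the template's actual routing logic, thresholds, and Gemini model names may differ:

```javascript
// Hypothetical router: send simple queries to a cheap model and
// complex reasoning to a stronger one. The keyword list, length
// threshold, and model names are illustrative only.
const REASONING_HINTS = ['why', 'explain', 'compare', 'analyze', 'step by step', 'debug'];

function pickModel(message) {
  const text = message.toLowerCase();
  const looksComplex =
    text.length > 400 || REASONING_HINTS.some(hint => text.includes(hint));
  return looksComplex ? 'gemini-pro' : 'gemini-flash-lite';
}
```

In practice you could also let a tiny classifier LLM make this call, but a keyword heuristic costs nothing per message, which matches the template's cost-optimization goal.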
by Avkash Kakdiya
How it works

This workflow automatically detects completed orders in PostgreSQL and prepares them for AI-based post-purchase communication. It enriches each order with customer, product, and payment data, then generates a personalized message using an AI agent. The message is delivered via email and WhatsApp, and finally logged in Google Sheets for tracking and auditing.

Step-by-step

**Step 1: Fetch and prepare completed orders for AI processing**

Postgres Trigger – Watches the orders table for updates and initiates the workflow.
Postgres (Execute query) – Fetches only orders marked as completed.
Split In Batches – Loops through completed orders safely and sequentially.
Postgres (Execute query) – Retrieves full customer, product, and payment details using joins.
AI Agent – Generates a personalized post-purchase message using order data.
Groq Chat Model – Supplies the language model used by the AI agent.
Merge – Combines AI-generated text with database results for downstream use.

**Step 2: Deliver messages and log post-purchase communication**

Code – Formats AI output into clean email and WhatsApp message templates.
Gmail – Sends the post-purchase email to the customer.
WhatsApp – Sends the same message via WhatsApp.
Set – Flags email and WhatsApp messages as successfully sent.
Google Sheets – Appends customer, order, and communication details.
Wait – Pauses before continuing to process the next completed order.

Why use this?

Automates post-purchase communication with zero manual effort.
Ensures consistent, personalized messaging across email and WhatsApp.
Adapts message tone automatically based on payment status.
Creates a centralized audit log in Google Sheets.
Scales easily as order volume grows.
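The Code step in Step 2 can be sketched as a small formatting helper. The field names (`customer_name`, `order_id`) and the template text are hypothetical; they would need to match the columns your SQL joins actually return:

```javascript
// Hypothetical formatter: turns the AI agent's text plus order fields
// into an email body and a shorter WhatsApp message.
// Field names are illustrative and must match your query output.
function formatMessages(order, aiText) {
  const email = [
    `Hi ${order.customer_name},`,
    '',
    aiText,
    '',
    `Order reference: #${order.order_id}`,
  ].join('\n');

  // WhatsApp favors short messages, so trim the AI text.
  const whatsapp =
    `Hi ${order.customer_name}! ${aiText.slice(0, 300)} (Order #${order.order_id})`;

  return { email, whatsapp };
}
```

Keeping both channel templates in one function means a single place to change the tone or branding, while the Gmail and WhatsApp nodes just consume `email` and `whatsapp` fields.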
by Abdullah Alshiekh
What Problem Does It Solve?

Staying up to date with AI and LLM developments requires reading dozens of articles every week. Manual research is time-consuming and often leads to information overload or reading low-quality clickbait. Important technical breakthroughs often get buried under marketing fluff.

This workflow solves these problems by:

Leveraging Decodo to instantly find and scrape high-quality organic articles (automatically filtering out YouTube/video noise).
Using AI to read and summarize every article individually.
Using an "Analyst Agent" to score news by relevance and write a single, high-quality intelligence report.

How to Configure It

Decodo & API Setup

**Decodo:** Connect your Decodo credentials. This is the core engine that handles the high-precision Google Search and content scraping.
**OpenAI:** Connect your OpenAI API key (GPT-4o or 4.1-mini recommended for best analysis).
**Gmail:** Connect a Google Service Account or Gmail OAuth to send the emails.

Search Configuration

Open the Set Search Config node. Edit the search_query value to match your niche (e.g., "Latest Large Language Model benchmarks" or "Generative AI in Healthcare").

How It Works

**Trigger:** The workflow wakes up once a week (customizable).
**Search (Powered by Decodo):** It searches Google using Decodo's organic results filter to ensure only high-quality reading material is selected.
**Scraping:** It visits every URL found and extracts the raw text, cleaning up HTML tags.
**Summarization:** An LLM reads each article individually to extract key technical points.
**Analyst Agent:** Reviews all summaries, assigns a "Relevance Score", and compiles the final newsletter.
**Delivery:** The final report is emailed to you immediately.

Customization Ideas

Change the topic to any industry (Crypto, Finance, Sports).
Swap the AI model for Claude or DeepSeek.
Log the summaries into a Notion database.

If you need any help

Get In Touch
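The "filtering out YouTube/video noise" step could be sketched as a post-search filter over the organic results. The domain list and the `{ url }` result shape are assumptions for illustration, not Decodo's actual response format:

```javascript
// Hypothetical filter: drop video/social results from the organic
// search output before scraping. Domain list is illustrative.
const VIDEO_DOMAINS = ['youtube.com', 'youtu.be', 'vimeo.com', 'tiktok.com'];

function keepArticles(results) {
  return results.filter(r => {
    try {
      const host = new URL(r.url).hostname.replace(/^www\./, '');
      return !VIDEO_DOMAINS.some(d => host === d || host.endsWith('.' + d));
    } catch {
      return false; // drop malformed URLs
    }
  });
}
```

Filtering by hostname rather than by substring avoids false positives such as an article URL that merely mentions "youtube" in its path.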