by Automate With Marc
✉️ Telegram Email Agent with GPT + Gmail

Category: Messaging / AI Agent
Level: Beginner-Friendly
Tags: Telegram, Email Automation, AI Agent, Gmail, GPT Model

Watch the step-by-step video guide here: https://www.youtube.com/watch?v=nyI40s9QOuw&t=420s&pp=0gcJCb4JAYcqIYzv

🤖 What This Workflow Does
This workflow turns your Telegram bot into a personal email assistant powered by AI. With just a message on Telegram, users can:
- Send an email via Gmail
- Automatically generate the email content using OpenAI models
- Get confirmation or responses directly in Telegram

It's like ChatGPT meets Gmail, inside your Telegram chat.

🔧 How It Works
1. Telegram Trigger – Listens for incoming messages from your bot.
2. AI Agent – Processes the input with an OpenAI model and converts it into structured email content (To, Subject, Body); see the sketch after the setup instructions.
3. Memory Node – Stores short-term context per user (via chat ID), so the agent can hold simple conversations.
4. Gmail Node – Sends the generated email using your Gmail account.
5. Telegram Node – Replies to the user confirming the output or status.

🧠 Why This Is Useful
Ever wanted to send an email while on the go, without typing the whole thing out in Gmail? This is a fast, intuitive, AI-powered way to:
- Dictate or draft emails from anywhere
- Create an AI-powered virtual assistant via Telegram
- Integrate n8n's LangChain Agent with real-world productivity use cases

🪜 Setup Instructions
1. Connect your Telegram bot via BotFather and add the credentials in n8n.
2. Set up your OpenAI API key (GPT-4o-mini recommended).
3. Add your Gmail OAuth credentials.
4. Activate the workflow and start messaging your bot!
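A minimal sketch of an optional Code node that could sit between the AI Agent and the Gmail node, assuming the agent is prompted to reply with a JSON object containing to, subject, and body; the parsing logic and field names are assumptions, not part of the original template.

```javascript
// n8n Code node (JavaScript) – minimal sketch, assuming the AI Agent returns
// its drafted email as a JSON string in the `output` field.
const raw = $input.first().json.output;

let email;
try {
  email = JSON.parse(raw);
} catch (e) {
  // Fall back to a message the Telegram node can relay to the user.
  return [{ json: { error: "Could not parse the email draft. Please rephrase your request." } }];
}

// Basic sanity checks before handing off to the Gmail node.
if (!email.to || !email.subject || !email.body) {
  return [{ json: { error: "Missing recipient, subject, or body in the drafted email." } }];
}

return [{ json: { to: email.to.trim(), subject: email.subject.trim(), body: email.body } }];
```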
by NovaNode
Who is this for?
This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering.

What problem is this workflow solving?
Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly with AI.

What these workflows do

Workflow 1: Document Ingestion & Indexing
- Manually triggered to import product documentation from Google Docs.
- Automatically splits large documents into chunks for efficient searching.
- Generates vector embeddings for each chunk using OpenAI embeddings.
- Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search (see the example chunk document below).

Workflow 2: AI-Powered Query & Response
- Listens for incoming user questions (can be extended to a webhook).
- Converts questions to vector embeddings and performs a similarity search on the MongoDB vector store.
- Uses OpenAI's GPT-4o-mini model with retrieval-augmented generation to produce direct, context-aware answers.
- Maintains short-term conversation context using a memory buffer node.

Setup

Setting up vector embeddings
1. Authenticate Google Docs and connect the Google Docs URL containing the product documentation you want to index.
2. Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings.
3. Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference.

Setting up chat
1. Configure the AI system prompt in the "Knowledge Base Agent" node to reflect your company's tone, answering style, and any business rules.
2. Update the workflow description and instructions to help users understand the chat's purpose and capabilities.
3. Connect the MongoDB collection used for vector search in the chat workflow and update the vector search index if needed to match your setup.

Make sure both MongoDB nodes (in the ingestion and chat workflows) are connected to the same collection, with:
- an embedding field storing vector data,
- relevant metadata fields (e.g., document ID, source), and
- the same vector index name configured (e.g., data_index).

Search Index Example:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "_id": { "type": "string" },
      "text": { "type": "string" },
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "source": { "type": "string" },
      "doc_id": { "type": "string" }
    }
  }
}
```
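For orientation, a minimal sketch of what one indexed chunk document could look like in the MongoDB Atlas collection, assuming the field names from the search index above and 1536-dimension OpenAI embeddings; the documents the vector store node actually writes may carry additional metadata.

```javascript
// Illustrative shape of one chunk document in the vector collection.
// Field names mirror the search index above; all values are placeholders.
const exampleChunk = {
  text: "To reset the device, hold the power button for ten seconds...",
  embedding: [0.0123, -0.0456, 0.0789], // in practice, 1536 floats from the embedding model
  source: "Product Manual (Google Docs)",
  doc_id: "1AbCdEfGhIjKlMnOp",           // hypothetical Google Docs document ID
};

// Inside an n8n Code node you could return it like this for inspection:
return [{ json: exampleChunk }];
```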
by Abdul Mir
Company Website Chatbot Agent

Overview
This workflow implements a modular Website AI Chatbot Assistant capable of handling multiple types of customer interactions autonomously. Instead of relying on a single large agent to handle all logic and tools, this system routes user queries to specialized sub-agents, each dedicated to a specific function. By using a manager-style orchestration layer, this approach prevents overloading a single AI model with excessive context, leading to cleaner routing, faster execution, and easier scaling as your automation needs grow.

How It Works
1. Chat Trigger
The flow is initiated when a chat message is received via the website widget.
2. Manager Agent (Ultimate Website AI Assistant)
The central LLM-based agent is responsible for parsing the message and deciding which specialized sub-agent to route it to. It uses an OpenAI GPT model for natural language understanding and a lightweight memory system to preserve recent context.
3. Sub-Agent Routing
- calendarAgent: Handles availability checks and books meetings on connected calendars.
- RAGAgent: Searches company documentation or FAQs to provide accurate responses from your internal knowledge base.
- ticketAgent: Forwards requests to human support by generating and sending support tickets to a designated email.

Setup Instructions
1. Embed the Chatbot
- Use a custom HTML widget or script to embed the chatbot interface on your website.
- Connect the frontend to the webhook that triggers the When chat message received node (a minimal example appears after this section).
2. Configure Your OpenAI Key
- Insert your API key in the OpenAI Chat Model node.
- Adjust the model parameters for temperature, max tokens, etc., based on how formal or creative you want the bot to be.
3. Customize Sub-Agents
- calendarAgent: Connect to your Google or Outlook calendar.
- RAGAgent: Link to a vector store or document database via API or native integration.
- ticketAgent: Set the destination email and format for ticket generation (e.g. via SendGrid or SMTP).
4. Deploy in Production
- Host on n8n Cloud or your self-hosted instance.
- Monitor usage through the Executions tab and refine prompts based on user behavior.

Benefits
- Modular system with dedicated logic per function
- Reduces token bloat by offloading complexity to sub-agents
- Easy to scale by adding more tools (e.g. CRM, analytics)
- Fast and responsive user experience for customers on your site
- Cleaner code structure and easier debugging
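A minimal sketch of how the embedded website widget could post messages to the chat webhook. The URL is a placeholder, and the chatInput/sessionId field names are assumptions based on n8n's chat trigger conventions, so match them to your trigger's configuration.

```javascript
// Website widget – minimal sketch of calling the n8n chat webhook.
async function sendChatMessage(message, sessionId) {
  const response = await fetch("https://your-n8n-instance.example.com/webhook/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chatInput: message, sessionId }), // field names assumed
  });
  if (!response.ok) throw new Error(`Chatbot request failed: ${response.status}`);
  return response.json(); // e.g. { output: "Sure, I can book that meeting..." }
}

// Example usage from the embedded widget:
sendChatMessage("Do you have availability on Friday?", "visitor-123")
  .then((reply) => console.log(reply));
```

The webhook's JSON response can then be rendered in the widget as the bot's reply.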
by Yang
👤 Who is this for?
This workflow is ideal for social media managers, personal brand strategists, ghostwriters, and founders who want to post regularly on LinkedIn without spending hours writing from scratch. It's also useful for marketing agencies and assistants looking to automate consistent post creation using curated articles as source material.

🧩 What problem does this workflow solve?
Manually reading multiple articles, extracting key insights, and writing a clean, professional LinkedIn post is a time-consuming process. This workflow automates everything: from pulling topics, finding related articles, summarizing them using AI, and even generating a matching image to accompany the post. It ensures faster content turnaround, more consistency, and less manual effort.

🔁 What this workflow does
This workflow starts manually and retrieves one topic marked as "To do" from a Google Sheet. That topic is used as a search term for Dumpling AI's search endpoint, which scrapes and returns the top three article contents related to the topic. These articles are sent to a LangChain agent powered by GPT-4o, which analyzes and summarizes the content into a LinkedIn post in a friendly, insightful tone. It also generates an image prompt for the post. After generating the post and image prompt, the data is extracted using a Set node (see the extraction sketch below). The prompt is sent to Dumpling AI's image generation endpoint, which returns an image URL. Finally, the post text, image prompt, image URL, and status update ("created") are saved back to the original row in Google Sheets.

🛠️ Workflow Breakdown
1. Manual Trigger – Starts the automation.
2. Google Sheets (Get Topic) – Searches for the first row in your content pipeline sheet where the "status" is "To do".
3. HTTP Request (Dumpling AI Search) – Uses the topic as a search query to pull 3 article contents using Dumpling AI's API.
4. Set LangChain GPT Model – Defines GPT-4o as the LLM for the LangChain Agent.
5. LangChain Agent (Summarize & Generate) – Summarizes all 3 articles and generates a LinkedIn post and a related image prompt.
6. Set (Extract Data) – Extracts postText and imagePrompt from the LangChain agent output.
7. HTTP Request (Dumpling Image Gen) – Sends imagePrompt to Dumpling AI's image generation endpoint.
8. Update Google Sheets – Writes the post, image prompt, and image URL back to the sheet and changes the row status to "created".

⚙️ Setup Instructions
1. Dumpling AI
- Sign up at Dumpling AI
- Get your API key and connect it in the HTTP Request nodes (Search and Image endpoints)
- Use the /search endpoint to retrieve article content
- Use the /generate-image endpoint to create the image
2. Google Sheets
- Create a spreadsheet with columns: topic, status, postText, imagePrompt, imageURL
- Add sample topics and set their status to To do
3. LangChain (GPT-4o)
- Connect your OpenAI credentials to n8n
- Make sure GPT-4o is available in your OpenAI account
- Use the LangChain node to process multi-input summarization and generate a social media caption
4. Customize the Prompt (Optional)
- Adjust the Set node to tweak the input format sent to the LangChain agent
- Add constraints like tone, hashtags, or emojis to fit your brand style

🧠 How to Customize This Workflow
- Change the content source (RSS feed, Notion DB, etc.) instead of Google Sheets
- Add a scheduler node to run this automatically every morning or weekly
- Use Airtable instead of Google Sheets for more control and filtering
- Send the final post to LinkedIn using the Buffer or LinkedIn API
- Add a Telegram or Slack notification when new content is ready for approval
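A minimal sketch of a Code-node alternative to the Extract Data step, assuming the LangChain agent is prompted to reply with a JSON object containing postText and imagePrompt; the cleanup logic and field names are assumptions, so adjust them to your actual prompt.

```javascript
// n8n Code node – minimal sketch, assuming the agent answers with a JSON
// object like { "postText": "...", "imagePrompt": "..." }.
const agentOutput = $input.first().json.output;

// The agent may wrap JSON in markdown fences; strip them before parsing.
const cleaned = agentOutput.replace(/`{3}(json)?/g, "").trim();
const { postText, imagePrompt } = JSON.parse(cleaned);

return [{ json: { postText, imagePrompt, status: "created" } }];
```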
by Zain Ali
🧾 Generate Project Summary from meeting transcript

Who's it for 🤝
- Project managers looking to automate client meeting summaries
- Client success teams needing structured deliverables from transcripts
- Agencies and consultants who want consistent, repeatable documentation

How it works / What it does ⚙️
1. Trigger: A manual or webhook trigger kicks off the workflow.
2. Get meeting transcript: Reads the raw transcript from a specified Google Docs file.
3. Generate summary: Sends the transcript plus instructions to OpenAI (gpt-4.1-mini) to produce a structured project summary.
4. Convert to HTML: Transforms the LLM-generated Markdown into styled HTML.
5. Prepare request: Wraps the HTML and metadata into a multipart request body (see the sketch below).
6. Create Google Doc: Uploads the new "Project Summary" document into your Drive folder.

How to set up 🛠️
Credentials
- Google Docs & Drive OAuth2 credentials
- OpenAI API key (gpt-4.1-mini)

Nodes configuration
- Manual Trigger / webhook node
- Google Docs "Get meeting transcript" node: set documentURL
- AI Chat Model node: select gpt-4.1-mini
- Markdown node: enable tables & emoji
- Google Drive "CreateGoogleDoc" node: set target folder ID

Paste in your IDs
- Update documentURL to your transcript doc
- Update google_drive_folder_id in the Set node

Execute
- Click "Execute Workflow" or call via webhook

Requirements 📋
- n8n
- Google OAuth2 scopes for Docs & Drive
- OpenAI account with GPT-4.1-mini access
- A Google Drive folder to store summaries

How to customize ✨
- **Output format**: Edit the Markdown prompt in the ChainLlm node to adjust headings or tone
- **Timeline section**: Extend the LLM prompt template with your own phase table
- **Styling**: Tweak inline CSS in the Code node (Prepare_Request) for fonts or margins
- **Trigger**: Swap the Manual Trigger for an HTTP/Webhook trigger to integrate with other tools
- **Language model**: Upgrade to a different model by changing model.value in the AI node
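A minimal sketch of what the Prepare_Request Code node could produce, assuming the upload targets Google Drive's multipart upload endpoint (uploadType=multipart) and relies on Drive converting HTML into a Google Doc; the boundary string and input field names are illustrative.

```javascript
// n8n Code node – sketch of a multipart/related body for Drive's upload endpoint.
const html = $json.html;                       // styled HTML from the Markdown node
const folderId = $json.google_drive_folder_id; // set earlier in the workflow
const boundary = "n8n-project-summary";

const metadata = {
  name: "Project Summary",
  parents: [folderId],
  mimeType: "application/vnd.google-apps.document", // ask Drive to convert the HTML to a Google Doc
};

const body =
  `--${boundary}\r\n` +
  `Content-Type: application/json; charset=UTF-8\r\n\r\n` +
  `${JSON.stringify(metadata)}\r\n` +
  `--${boundary}\r\n` +
  `Content-Type: text/html\r\n\r\n` +
  `${html}\r\n` +
  `--${boundary}--`;

return [{
  json: {
    contentType: `multipart/related; boundary=${boundary}`,
    body,
  },
}];
```

The HTTP Request node can then POST `body` to https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart with the Content-Type header set to the `contentType` value above.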
by JaredCo
This n8n workflow demonstrates how to transform natural language date and time expressions into structured data with 96%+ accuracy. Parse complex expressions like "early next July", "2 weeks after project launch", or "end of Q3" into precise datetime objects with confidence scoring, timezone intelligence, and business rules validation for any automation workflow.

Good to know
- Achieves 96%+ accuracy on complex natural language date expressions
- At time of writing, this is the most advanced open-source date parser available
- Includes AI learning that improves over time with user corrections
- Supports 6 languages with auto-detection (English, Spanish, French, German, Italian, Portuguese)
- Sub-millisecond response times with intelligent caching
- Enterprise-grade with business intelligence and timezone handling

How it works
- **Natural Language Input**: Receives date expressions via webhook, form, email, or chat
- **AI-Powered Parsing**: Your world-class date parser processes the text through:
  - 50+ custom rule patterns for complex expressions
  - Multi-language auto-detection and smart translation
  - Confidence scoring (0.0-1.0) for AI decision-making
  - Ambiguity detection with helpful suggestions
- **Business Intelligence**: Applies enterprise rules automatically:
  - Holiday calendar awareness (US + International)
  - Working hours validation and warnings
  - Business day auto-adjustment
  - Timezone normalization (IANA format)
- **Smart Scheduling**: Creates calendar events with:
  - Structured datetime objects (start/end times)
  - Confidence metadata for workflow decisions
  - Alternative interpretations for ambiguous inputs
  - Rich context for follow-up actions
- **Integration Ready**: Outputs connect seamlessly to:
  - Google Calendar, Outlook, Apple Calendar
  - CRM systems (HubSpot, Salesforce)
  - Project management tools (Notion, Asana)
  - Communication platforms (Slack, Teams)

How to use
1. The webhook trigger receives natural language date requests from any source
2. Replace the MCP server URL with your deployed date parser endpoint
3. Configure timezone preferences for your organization
4. Customize business rules (working hours, holidays) in the parser settings
5. Connect calendar integration nodes for automatic event creation
6. Add notification workflows for scheduling confirmations

Use Cases
- **Meeting Scheduling**: "Schedule our quarterly review for early Q3"
- **Project Management**: "Set deadline 2 weeks after product launch"
- **Event Planning**: "Book venue for the weekend before Labor Day"
- **Personal Assistant**: "Remind me about dentist appointment next Tuesday morning"
- **International Teams**: "Team standup tomorrow morning" (auto-timezone conversion)
- **Seasonal Planning**: "Launch campaign in late spring 2025"

Requirements
- Natural Language Date Parser MCP server (provided code)
- Webhook endpoint or form trigger
- Calendar integration (Google Calendar, Outlook, etc.)
- Optional: Slack/Teams for notifications
- Optional: Database for learning pattern storage

Customizing this workflow
- **Multi-language Support**: Enable auto-detection for global teams
- **Business Rules**: Configure company holidays and working hours
- **Learning System**: Enable AI learning from user corrections
- **Integration Depth**: Connect to your existing calendar and CRM systems
- **Confidence Thresholds**: Set minimum confidence levels for auto-scheduling (see the routing sketch below)
- **Ambiguity Handling**: Route unclear dates to human review or clarification requests

Sample Input/Output

Input Examples:
- "early next July"
- "2 weeks after Thanksgiving"
- "next Wednesday evening"
- "Q3 2025"
- "mañana por la mañana" (Spanish)
- "first thing Monday"

Rich Output:

```json
{
  "parsed": [{
    "start": "2025-07-01T00:00:00Z",
    "end": "2025-07-10T23:59:59Z",
    "timezone": "America/New_York"
  }],
  "confidence": 0.95,
  "method": "custom_rules",
  "business_insights": [{
    "type": "business_warning",
    "message": "Selected date range includes July 4th holiday"
  }],
  "predictions": [{
    "type": "time_preference",
    "suggestion": "You usually schedule meetings at 10 AM"
  }],
  "ambiguities": [],
  "alternatives": [{
    "interpretation": "Early July 2026",
    "confidence": 0.15
  }],
  "performance": {
    "cache_hit": true,
    "response_time": "0.8ms"
  }
}
```

Why This Workflow is Unique
- **World-Class Accuracy**: 96%+ success rate on complex expressions
- **AI Learning**: Improves over time with user feedback
- **Global Ready**: Multi-language and timezone intelligence
- **Business Smart**: Enterprise rules and holiday awareness
- **Performance Optimized**: Sub-millisecond cached responses
- **Context Aware**: Provides confidence scores and alternatives for AI decision-making

Transform your scheduling workflows from rigid form inputs to natural, conversational date requests that your users will love!
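A minimal sketch of the confidence-threshold routing mentioned under "Customizing this workflow", assuming the parser returns the structure shown in the Rich Output sample above; the 0.8 threshold is an illustrative value, not a recommendation from the original template.

```javascript
// n8n Code node – route parsed dates to auto-scheduling or human review.
const result = $input.first().json;
const AUTO_SCHEDULE_THRESHOLD = 0.8; // tune to your tolerance for auto-scheduling

const needsReview =
  result.confidence < AUTO_SCHEDULE_THRESHOLD || (result.ambiguities || []).length > 0;

return [{
  json: {
    ...result,
    route: needsReview ? "human_review" : "auto_schedule",
    // Surface the best alternative so a reviewer can confirm quickly.
    suggestedAlternative: (result.alternatives || [])[0] || null,
  },
}];
```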
by Alexander Bentlund
Search music and play to Spotify from Telegram

This workflow is a simple demonstration of calling a message model from Telegram, and it makes searching for songs easy even if you can't remember the artist or song name. An OpenAI message model tries to figure out the song and sends it to an active Spotify device**.

Use case
Imagine an office where music plays in the background and employees can control it without having to log in to the playing account.

How it works
1. You describe the song in Telegram.
2. The Telegram bot sends the text to n8n.
3. An OpenAI message model tries to find the song.
4. Spotify gets the search query string.
5. The first match is then added to the queue. (If there is no match, a message is sent to Telegram and the process ends.)
6. We change to the next track in the list.
7. We make sure the song starts playing by trying to resume.
8. We fetch the currently playing track.
9. We return "now playing" information to Telegram: Song Name - Artist Name - Album Name (see the sketch below).

Error handling
Every Spotify step has its own error handler under settings, where we output the error. The message parser receives the error and sends it to Telegram.

Requirements
- Active workflow*
- OpenAI API key
- Telegram bot
- Spotify account and OAuth2 API
- Spotify active on a device**

* The Telegram trigger is activated only if this workflow is active. You can, however, TEST the workflow in the editor by clicking "Test step", which then waits for the Telegram event. When the event is received, just step through all the steps, or click "Test step" on the "Fetch Now Playing" node.

** You must have an active Spotify device when trying to communicate with a device. Open Spotify and play something; now it is active.
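A minimal sketch of how the "now playing" reply could be assembled in a Code node, assuming Spotify's currently-playing response shape (item.name, item.artists, item.album.name); adjust the paths if your Spotify node's output differs.

```javascript
// n8n Code node – build the Telegram "now playing" message.
const track = $input.first().json.item;

if (!track) {
  return [{ json: { text: "Nothing seems to be playing right now." } }];
}

const artists = track.artists.map((a) => a.name).join(", ");
return [{
  json: { text: `Now playing: ${track.name} - ${artists} - ${track.album.name}` },
}];
```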
by n8n Team
This workflow automatically syncs Shopify orders with your Zendesk tickets. Using this workflow, Shopify orders will be added to Zendesk or have their information updated on the matching ticket.

Prerequisites
- Shopify account and Shopify credentials
- Zendesk account and Zendesk credentials

How it works
1. The Shopify Trigger starts the workflow whenever an order is updated.
2. A Zendesk node checks whether the order already exists and has a ticket assigned.
3. A Set node keeps and passes only the ticket ID.
4. A Merge by Key node combines the Shopify order data with the Zendesk ticket data.
5. An If node branches the workflow based on whether a ticket already exists (see the sketch below).
6. If the order is new, a Zendesk node creates a new ticket for it.
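A minimal sketch of the create-vs-update decision, assuming the merge step attaches a ticket ID field when a matching Zendesk ticket is found; the field names are assumptions, not the template's exact configuration.

```javascript
// n8n Code node – decide whether the order needs a new ticket or an update.
const order = $input.first().json;

if (order.ticket_id) {
  // Existing ticket: pass through so the update branch can refresh its details.
  return [{ json: { action: "update", ticketId: order.ticket_id, orderId: order.id } }];
}

// New order: the create branch should open a ticket referencing the order.
return [{
  json: {
    action: "create",
    subject: `Shopify order #${order.order_number ?? order.id}`, // field name assumed
    orderId: order.id,
  },
}];
```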
by Hostinger
Quickly transform any LinkedIn profile URL into a concise, AI-generated professional summary: perfect for recruiters, sales teams, and hiring managers who need instant insights into prospects or candidates without manual research.

How it works
1. The workflow polls a Google Sheet for new or updated rows containing LinkedIn profile URLs.
2. For each URL, the Real-Time LinkedIn Scraper API (via RapidAPI) pulls the experience and education sections.
3. The extracted profile data is sent to OpenAI's GPT model, which generates a clean, structured summary highlighting key strengths, career trajectory, and differentiators.
4. The generated summary is written back into a new column in the same row of your Google Sheet for easy review and sharing.

Set up steps
1. Connect your Google account and select the spreadsheet + worksheet containing your list of LinkedIn URLs.
2. Sign up for the Real-Time LinkedIn Scraper API on RapidAPI, copy your API key, and add it to the workflow's HTTP Request node (see the request sketch below).
3. Insert your OpenAI API key credentials.
4. Ensure your Google Sheet has one column for "linkedin_url" and create two empty columns named "full_name" and "summary" (or customize them based on your needs).
5. Run a single row through the workflow to verify scraping accuracy and summary formatting, then turn on the workflow for continuous automation.

With this template, eliminate hours of manual profile review and instantly gain actionable insights, so you can focus on what really matters: building relationships and closing deals.
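A minimal sketch of the scraper call the HTTP Request node makes, shown as plain JavaScript; the host and path are placeholders to copy from the API's RapidAPI page, while the header names follow the standard RapidAPI convention.

```javascript
// Sketch of fetching a LinkedIn profile via a RapidAPI scraper endpoint.
async function fetchProfile(linkedinUrl, rapidApiKey) {
  const host = "real-time-linkedin-scraper.example.p.rapidapi.com"; // hypothetical host
  const url = `https://${host}/profile?url=${encodeURIComponent(linkedinUrl)}`; // hypothetical path

  const res = await fetch(url, {
    headers: {
      "x-rapidapi-key": rapidApiKey,
      "x-rapidapi-host": host,
    },
  });
  if (!res.ok) throw new Error(`Scraper request failed: ${res.status}`);
  return res.json(); // expected to include the experience and education sections
}
```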
by Tomek
How it works
- Use Telegram to send in new phrases (flashcard front).
- You can also manually input a phrase in the workflow itself.
- ChatGPT generates a description of the provided phrase (in English, but you can change it), including multiple meanings, and generates examples of using the phrase in a sample sentence (flashcard back); see the row sketch below.

Steps to setup
1. Provide your Telegram bot API key (optional)
2. Provide your OpenAI key
3. Provide Google Sheets credentials

How to import flashcards from Google Sheets into Anki
- Use the Google Sheets to Anki add-on: 1871608121
- In Anki simply click Sync Decks and you're done :)

Enjoy
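A minimal sketch of shaping one flashcard row before the Google Sheets append step, assuming front/back column headers; match the names to whatever your sheet and the Anki add-on expect.

```javascript
// n8n Code node – shape a single flashcard row for the Google Sheets node.
const phrase = $json.phrase;        // text received from Telegram or manual input
const description = $json.output;   // ChatGPT-generated meanings + example sentences

return [{
  json: {
    front: phrase,
    back: description,
    created_at: new Date().toISOString(), // optional bookkeeping column
  },
}];
```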
by Jon Bungartz
How it works
- Creates a new page in Confluence based on a page template also defined in Confluence
- Replaces any number of placeholders with data from your workflow (see the sketch below)
- Generic implementation for maximum flexibility

Set up steps
All parameters you need to change are defined in the Set node:
- Set your Atlassian domain
- Set the template id you want to use as the basis for new pages
- Set the target space and parent page for new pages added based on that template

🎥 The explainer video has all the details. =)

Feedback
Any feedback is welcome. If you have ideas for improvements, let me know.
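A minimal sketch of the generic placeholder replacement, assuming placeholders in the Confluence template body are written as {{placeholder-name}}; the delimiter and the example keys are assumptions, not the template's actual convention.

```javascript
// n8n Code node – replace template placeholders with workflow data.
const templateHtml = $json.templateBody;      // page body fetched from the Confluence template
const values = {                              // data collected earlier in the workflow
  "project-name": $json.projectName,
  "start-date": $json.startDate,
};

const pageBody = templateHtml.replace(/{{\s*([\w-]+)\s*}}/g, (match, key) =>
  key in values ? String(values[key]) : match // leave unknown placeholders untouched
);

return [{ json: { pageBody } }];
```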
by Yang
📄 What this workflow does
This workflow helps you analyze Google reviews of any business to generate powerful marketing insights. By simply submitting a business name and its Google Place ID, it fetches the top 30 reviews and uses GPT-4 (via a LangChain Agent) to extract valuable customer insights such as marketing angles, customer motivations, product pain points, and voice of customer (VOC) quotes. The output is stored automatically in a connected Google Sheet.

👤 Who is this for
- Marketing teams looking for messaging inspiration
- Founders or product managers exploring customer feedback
- Brand strategists gathering real-world insights
- Agencies running VOC or sentiment analysis

🛠️ Requirements
- Dumpling AI API key
- OpenAI GPT-4 or GPT-4o access
- Google Sheets connection
- A form or manual input with:
  - Business Name
  - Google Place ID

⚙️ How to set up
1. Connect Credentials
- Dumpling AI (via HTTP Header Auth)
- OpenAI (GPT-4)
- Google Sheets (OAuth2)
2. Prepare your Google Sheet
- Create columns: Business Name, Place ID, Marketing Angles, Customer Motivations, Frictions and Barriers, Product Opportunities, VOC Snippets
3. Update Nodes
- Replace the Google Sheets Document ID and Tab Name with yours
- Check that the Dumpling API node is linked to your credential
- Optional: tweak the prompt in the LangChain Agent node to fit your tone or goals

🤖 How it works (Workflow Steps)
1. User submits business name + Google Place ID
2. Dumpling AI fetches the top 30 reviews
3. The workflow aggregates the review text (see the sketch below)
4. GPT-4 via LangChain analyzes the reviews
5. Insights are parsed and logged to Google Sheets

💡 Customization Ideas
- Push output to Notion, Airtable, or Slack
- Add sentiment scoring to prioritize themes
- Create summaries for each insight category
- Schedule insights to be emailed weekly

This is a plug-and-play VOC research workflow, great for founders, marketers, and product teams who want actionable data from real customers without doing manual review scraping or summarizing.
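A minimal sketch of the review-aggregation step, assuming Dumpling AI returns a reviews array with rating and text fields; the paths and field names are assumptions, so adjust them to the actual response structure.

```javascript
// n8n Code node – combine the fetched reviews into one text block for the agent.
const reviews = $input.first().json.reviews || [];

const combined = reviews
  .slice(0, 30)
  .map((r, i) => `Review ${i + 1} (${r.rating ?? "n/a"} stars): ${r.text ?? ""}`)
  .join("\n\n");

return [{ json: { reviewText: combined, reviewCount: reviews.length } }];
```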