by Yang
## Who is this for?
This template is designed for content creators, marketing teams, educators, or media managers who want to repurpose video content into written blog posts with visuals. It's ideal for anyone looking to automate the process of transforming YouTube videos into professional blog articles and custom images.

## What problem is this workflow solving?
Creating written content from video material is time-consuming and manual. This workflow automates the entire pipeline: detecting new YouTube video uploads, transcribing the audio, turning the transcript into an engaging blog post, generating a matching visual, and saving both in Airtable. It saves hours of work while keeping your blog or social feed active and consistent.

## What this workflow does
This automation listens for new YouTube videos added to a Google Drive folder, extracts the full transcript using Dumpling AI, and sends it to GPT-4o to generate a blog post and an image prompt. Dumpling AI then turns the prompt into a 16:9 visual. The blog post and visual are saved into Airtable for easy publishing or curation.

## Setup
1. **Google Drive Trigger** – Create a folder in Google Drive and upload your YouTube videos there. Link this folder in the "Watch Folder for New YouTube Videos" node. Enable polling every minute, or adjust as needed.
2. **Download & Prepare the Video** – The video is downloaded and converted into base64 format by the next two nodes: Download Video File and Convert Downloaded Video to Base64.
3. **Transcription with Dumpling AI** – The base64 video is sent to Dumpling AI's extract-video endpoint (see the request sketch at the end of this section). You must have a Dumpling AI account and an API key with access to this endpoint: Dumpling AI Docs.
4. **Generate Blog Content with GPT-4o** – GPT-4o takes the transcript and generates a human-like blog post plus a descriptive prompt for AI image generation. Make sure your OpenAI credentials are configured.
5. **Generate the Visual** – The prompt is passed to Dumpling AI's generate-ai-image endpoint using the FLUX.1-pro model. The result is a clean 1024x576 image.
6. **Save to Airtable** – Blog content is stored under the Content field in Airtable, and the image prompt is added to the Attachments column as a visual reference. Ensure your Airtable base and table are preconfigured with the correct field names.

## How to customize this workflow to your needs
- Change the GPT prompt to alter the tone or format of the blog post (e.g., add bullet points or SEO tags).
- Modify the Dumpling AI prompt to generate different image styles.
- Add a scheduler or webhook trigger to run at different intervals or through other integrations.
- Connect this output to Ghost, Notion, or your CMS using additional nodes.

## 🧠 Sticky Note Summary
**Part 1: Transcription & Blog Prompt**
- Watches a Google Drive folder for new video uploads.
- Downloads and encodes the video.
- Transcribes the full audio with Dumpling AI.
- GPT-4o writes a blog post and a descriptive image prompt.

**Part 2: Image Generation & Airtable Save**
- Dumpling AI generates a visual from the image prompt.
- Blog content is saved to Airtable.
- The image prompt is patched into the Attachments field in the same record.

✅ Use this if you want to automate repurposing YouTube videos into blog content with zero manual work.
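For reference, the two Dumpling AI calls in this workflow look roughly like the sketch below. The endpoint names (extract-video, generate-ai-image) and the FLUX.1-pro model come from the description above, but the base URL, request field names, and response shapes are assumptions for illustration; check the Dumpling AI docs for the authoritative request format.

```ts
const DUMPLING_API = "https://app.dumplingai.com/api/v1"; // assumed base URL

// Send a base64-encoded video to the extract-video endpoint and return the transcript.
async function transcribeVideo(videoBase64: string, apiKey: string): Promise<string> {
  const res = await fetch(`${DUMPLING_API}/extract-video`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ video: videoBase64 }), // field name assumed
  });
  const data = await res.json();
  return data.transcript; // response shape assumed
}

// Turn the GPT-4o image prompt into a 16:9 visual with FLUX.1-pro.
async function generateImage(prompt: string, apiKey: string): Promise<string> {
  const res = await fetch(`${DUMPLING_API}/generate-ai-image`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model: "FLUX.1-pro", prompt, width: 1024, height: 576 }),
  });
  const data = await res.json();
  return data.url; // URL of the generated image (response shape assumed)
}
```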
by Aji Prakoso
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow provides a complete, ready-to-use template for a Retrieval-Augmented Generation (RAG) system. It allows you to build a powerful AI chatbot that can answer questions based on the content of PDF documents you provide, using a modern and powerful stack for optimal performance.

## Good to know
- **Costs:** This workflow uses paid services (OpenAI, Pinecone, Cohere). Costs will be incurred based on your usage. Please review the pricing pages for each service to understand the potential expenses.
- **Video tutorial (Bahasa Indonesia):** For a step-by-step guide on how this workflow functions, you can watch the accompanying video tutorial here: N8N Tutorial: Membangun Chatbot RAG dengan Pinecone, OpenAI, & Cohere.

## How it works
This workflow operates in two distinct stages.

**1. Data ingestion & indexing**
- It begins when a .pdf file is uploaded via the n8n Form Trigger.
- The Default Data Loader node processes the PDF, and the Recursive Character Text Splitter breaks the content into smaller, manageable chunks (a simplified sketch of this splitting strategy appears at the end of this section).
- The Embeddings OpenAI node converts these text chunks into vector embeddings (numerical representations).
- Finally, the Pinecone Vector Store node takes these embeddings and stores (upserts) them into your specified Pinecone index, creating a searchable knowledge base.

**2. Conversational AI agent**
- A user sends a message through the Chat Trigger.
- The AI Agent receives the message and uses its VectorDB tool to search the Pinecone index for relevant information.
- The Reranker Cohere node refines these search results, ensuring only the most relevant context is selected.
- The user's original question and the refined context are sent to the OpenAI Chat Model (gpt-4.1), which generates a helpful, context-aware answer.
- The Simple Memory node maintains conversation history, allowing for natural, multi-turn dialogues.

## How to use
Using this workflow is a two-step process:
1. **Populate the knowledge base:** First, you need to add documents. Trigger the workflow by using the Form Trigger and uploading a PDF file. Wait for the execution to complete. You can do this for multiple documents.
2. **Start chatting:** Once your data has been ingested, open the Chat Trigger's interface and start asking questions related to the content of your uploaded documents.

The Form Trigger is just an example. Feel free to replace it with other triggers, such as a node that watches a Google Drive or Dropbox folder for new files.

## Requirements
To run this workflow, you will need active accounts and API keys for the following services.
- **OpenAI account & API key:** Powers text embedding and the final chat generation. Required for the Embeddings OpenAI and OpenAI Chat Model nodes.
- **Pinecone account & API key:** Used to store and retrieve your vector knowledge base. Required for the Pinecone Vector Store and VectorDB nodes. You also need to provide your Pinecone environment.
- **Cohere account & API key:** Improves the accuracy of your chatbot by re-ranking search results for relevance. Required for the Reranker Cohere node.

## Customising this workflow
This template is a great starting point. Here are a few ways you can customize it:
- **Change the AI personality:** Edit the System Message in the AI Agent node to change the bot's behavior, tone, or instructions.
- **Use different models:** You can easily swap the OpenAI model for another one (e.g., gpt-3.5-turbo for lower costs) in the OpenAI Chat Model node.
- **Adjust retrieval:** In the VectorDB tool node, you can modify the Top K parameter to retrieve more or fewer document chunks to use as context.
- **Automate ingestion:** Replace the manual Form Trigger with an automated one, like a node that triggers whenever a new file is added to a specific cloud storage folder.
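As promised above, here is a simplified TypeScript sketch of what a recursive character text splitter does: split on the coarsest separator first and recurse to finer separators when a piece is still too large. The chunk size is illustrative, and the real node also supports chunk overlap, which this sketch omits.

```ts
// Split text into chunks of at most `chunkSize` characters, preferring to
// break at paragraph, then line, then word boundaries, then anywhere.
function splitText(
  text: string,
  chunkSize = 1000,
  separators: string[] = ["\n\n", "\n", " ", ""]
): string[] {
  if (text.length <= chunkSize) return [text];
  const [sep, ...rest] = separators;
  const pieces = sep === "" ? Array.from(text) : text.split(sep);
  const chunks: string[] = [];
  let current = "";
  for (const piece of pieces) {
    if (piece.length > chunkSize) {
      // This piece alone exceeds the limit: flush and recurse with finer separators.
      if (current) { chunks.push(current); current = ""; }
      chunks.push(...splitText(piece, chunkSize, rest));
      continue;
    }
    const candidate = current ? current + sep + piece : piece;
    if (candidate.length <= chunkSize) {
      current = candidate; // keep accumulating within the limit
    } else {
      chunks.push(current); // flush the full chunk, start a new one
      current = piece;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```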
by explorium
# Google Sheets Company Enrichment with Explorium MCP

Template: download the following JSON file and import it into a new n8n workflow: google_sheets_enrichment.json

## Overview
This n8n workflow template enables automatic enrichment of company information in your Google Sheets. When you add a new company or update existing company details (name or website), the workflow automatically fetches additional business intelligence data using Explorium MCP and updates your sheet with:
- Business ID
- NAICS industry code
- Number of employees (range)
- Annual revenue (range)

## Key Features
- **Automatic triggering:** Monitors your Google Sheet for new rows or updates to company name/website fields
- **Smart processing:** Only processes new or modified rows, not the entire sheet
- **Data validation:** Ensures both company name and website are present before processing
- **Error handling:** Processes each row individually to prevent one failure from affecting others
- **Powered by AI:** Uses Claude Sonnet 4 with Explorium MCP for intelligent data enrichment

## Prerequisites
Before setting up this workflow, ensure you have:
- An n8n instance (self-hosted or cloud)
- A Google account with access to Google Sheets
- An Anthropic API key for Claude
- An Explorium MCP API key

## Installation & Setup

### Step 1: Import the Workflow
1. Create a new workflow.
2. Download the workflow JSON from above.
3. In your n8n instance, go to Workflows → Add Workflow → Import from File.
4. Select the JSON file and click Import.

### Step 2: Create the Google Sheet
Create a new Google Sheet (or make a copy of this template). Your sheet must have the following columns (exact names):
- name – Company name
- website – Company website URL
- business_id – Will be populated by the workflow
- naics – Will be populated by the workflow
- number_of_employees_range – Will be populated by the workflow
- yearly_revenue_range – Will be populated by the workflow

### Step 3: Configure Google Sheets Credentials
You'll need to set up two Google credentials.

Google Sheets Trigger credentials:
1. Click on the Google Sheets Trigger node.
2. Under Credentials, click Create New.
3. On n8n Cloud, click the "Sign in with Google" button and grant permissions to read and monitor your Google Sheets.
4. On a self-hosted n8n instance, follow the OAuth2 authentication process and fill in the Client ID and Client Secret fields.

Google Sheets update credentials:
1. Click on the Update Company Row node.
2. Under Credentials, select the same credentials you created above, or create new ones.
3. Ensure permissions include write access to your sheets.

### Step 4: Configure Anthropic Credentials
1. Click on the Anthropic Chat Model node.
2. Under Credentials, click Create New.
3. Enter your Anthropic API key and save the credentials.

### Step 5: Configure Explorium MCP Credentials
1. Click on the MCP Client node.
2. Under Credentials, click Create New (Header Auth).
3. Fill the Name field with api_key and the Value field with your Explorium API key.
4. Save the credentials.

### Step 6: Link Your Google Sheet
1. In the Google Sheets Trigger node: select your Google Sheet from the dropdown and select the worksheet (usually "Sheet1").
2. In the Update Company Row node: select the same Google Sheet and worksheet, and ensure the matching column is set to row_number.

### Step 7: Activate the Workflow
Click the Active toggle in the top right to activate the workflow. The workflow will now monitor your sheet every minute for changes.

## How It Works

### Workflow Process Flow
1. **Google Sheets Trigger:** Polls your sheet every minute for new rows or changes to name/website fields
2. **Filter Valid Rows:** Validates that both company name and website are present (sketched at the end of this section)
3. **Loop Over Items:** Processes each company individually
4. **AI Agent:** Uses Explorium MCP to find the company's business ID and retrieve firmographic data (revenue, employees, NAICS code)
5. **Format Output:** Structures the data for Google Sheets
6. **Update Company Row:** Writes the enriched data back to the original row

### Trigger Behavior
- **First activation:** May process all existing rows to establish a baseline
- **Ongoing operation:** Only processes new rows or rows where name/website fields change
- **Polling frequency:** Checks for changes every minute

## Usage

### Adding New Companies
1. Add a new row to your Google Sheet.
2. Fill in the name and website columns.
3. Within 1 minute, the workflow will automatically detect the new row, enrich the company data, and update the remaining columns.

### Updating Existing Companies
Modify the name or website field of an existing row. The workflow will re-process that row with the updated information, and all enrichment data will be refreshed.

### Monitoring Executions
In n8n, go to Executions to see workflow runs. Each execution shows which rows were processed, success/failure status, and detailed logs for troubleshooting.

## Troubleshooting

### Common Issues
**All rows are processed instead of just new/updated ones**
- Ensure the workflow is activated, not just run manually; manual test runs will process all rows
- First activation may process all rows once

**No data is returned for a company**
- Verify the company name and website are correct
- Check whether the company exists in Explorium's database; some smaller or newer companies may not have data available

**Workflow isn't triggering**
- Confirm the workflow is activated (Active toggle is ON)
- Check that changes are made to the name or website columns
- Verify Google Sheets credentials have proper permissions

**Authentication errors**
- Re-authenticate Google Sheets credentials
- Verify the Anthropic API key is valid and has credits
- Check that the Explorium API key is correct and active

### Error Handling
The workflow processes each row individually, so if one company fails to enrich, other rows will still be processed and the failed row will retain its original data. Check the execution logs for specific error details.

## Best Practices
- **Data quality:** Ensure company names and websites are accurate for best results
- **Website format:** Include full URLs (https://example.com) rather than just domain names
- **Batch processing:** The workflow handles multiple updates efficiently, so you can add several companies at once
- **Regular monitoring:** Periodically check execution logs to ensure smooth operation

## API Limits & Considerations
- **Google Sheets API:** Subject to Google's API quotas
- **Anthropic API:** Each enrichment uses Claude Sonnet 4 tokens
- **Explorium MCP:** Rate limits may apply based on your subscription

## Support
For issues specific to:
- **n8n platform:** Consult the n8n documentation or community
- **Google Sheets integration:** Check n8n's Google Sheets node documentation
- **Explorium MCP:** Contact Explorium support for API-related issues
- **Anthropic/Claude:** Refer to Anthropic's documentation for API issues

## Example Use Cases
- **Sales prospecting:** Automatically enrich lead lists with company size and revenue data
- **Market research:** Build comprehensive databases of companies in specific industries
- **Competitive analysis:** Track and monitor competitor information
- **Investment research:** Gather firmographic data for potential investment targets
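For illustration, the Filter Valid Rows step boils down to logic like the following sketch, written as it might appear in an n8n Code node (the column names match the sheet layout above; the actual workflow node may be configured differently):

```ts
// n8n Code node sketch: keep only rows that have both a name and a website,
// so the AI Agent never receives a half-specified company.
const validRows = $input.all().filter((item) => {
  const name = String(item.json.name ?? "").trim();
  const website = String(item.json.website ?? "").trim();
  return name.length > 0 && website.length > 0;
});
return validRows;
```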
by Automate With Marc
# 🤖 AI Customer Support Agent with Google Docs Knowledge (Telegram + OpenAI)

This no-code workflow turns your Telegram bot into an intelligent, always-on AI support agent that references your business documentation in Google Docs to respond to customer queries—instantly and accurately.

Watch the full step-by-step video tutorial of the build here: https://youtu.be/Mlv7CjGO7wI

## 🔧 How it works
1. **Telegram Trigger** – Captures incoming messages from users on your Telegram bot
2. **Langchain AI Agent (OpenAI GPT)** – Interprets the message and uses RAG (retrieval-augmented generation) techniques to craft an answer
3. **Google Docs Tool** – Connects to and retrieves context from your specified Google Doc (e.g., FAQ, SOPs, policies)
4. **Memory Buffer** – Keeps track of recent chat history for more human-like conversations
5. **Telegram Reply Node** – Sends the AI-generated response back to the user (see the Bot API sketch below)

## 💡 Use Cases
- E-commerce customer service
- SaaS product onboarding
- Internal helpdesk bot for teams
- WhatsApp-style support for digital businesses

## 🧠 What makes this powerful
- Supports complex questions by referencing a live Google Doc knowledge base
- Works in plain conversational language (no buttons or forms needed)
- Runs 24/7 with zero code
- Easily extendable to Slack, WhatsApp, or email support

## 🛠️ Tools used
- Telegram node (trigger + send)
- Langchain Agent with OpenAI GPT
- Google Docs Tool
- Memory Buffer
- Sticky notes for easy understanding
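Behind the Telegram Reply node sits a single Telegram Bot API call. A minimal sketch of the equivalent request, in case you want to see what the node does for you (the bot token and chat id come from your trigger payload; "answer" stands in for the agent's generated reply):

```ts
// Send a text reply via Telegram's Bot API sendMessage method.
async function replyOnTelegram(botToken: string, chatId: number, answer: string): Promise<void> {
  await fetch(`https://api.telegram.org/bot${botToken}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, text: answer }),
  });
}
```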
by Parth Pansuriya
# AI-Powered Daily Gmail Digest Summary using LangChain & OpenRouter

This n8n template helps you automatically summarize your daily Gmail messages using an OpenRouter GPT model via LangChain. It generates a structured email digest highlighting key information, tasks, issues, and action items — all delivered to your inbox every morning.

## Who's it for
- Busy professionals who want a quick overview of their daily emails
- Founders or managers needing to track team or client communication
- Anyone looking to automate inbox triage and reduce time spent on emails

## How it works / What it does
This workflow runs every morning at 7 AM and automatically:
1. Fetches emails from the last 24 hours
2. Collects the important fields: sender, subject, and snippets
3. Feeds them into an AI-powered agent (OpenRouter + LangChain), which extracts key topics, tasks, deadlines, and issues and formats them into a clear bullet-point summary (a rough sketch of this step appears at the end of this section)
4. Sends the final summarized report to your inbox

## How to set up
1. Clone or import the workflow into your n8n instance.
2. Replace <Your Email ID> in the Code node with your actual Gmail address (or remove it if not needed).
3. Ensure your Gmail and OpenRouter credentials are set up in n8n.
4. Update the recipient email in the Send Summary node if you want it sent to a fixed address.
5. Activate the workflow once tested.

## How to customize the workflow
- **Change summary style:** Edit the system message in the LangChain Agent to match your tone (e.g., casual, business, detailed)
- **Adjust digest time:** Change the Schedule Trigger to any preferred hour
- **Customize recipients:** Change or add recipients dynamically or statically in the Gmail send node
- **Filter email type:** Modify the Gmail query in the Code node to include filters like from:, is:unread, or subject:project
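As a rough illustration of the fetch-and-summarize steps, the sketch below scopes the Gmail search to the last 24 hours with the standard operator newer_than:1d and flattens the collected fields into a single prompt for the agent. The item shape and prompt wording are illustrative, not the exact ones used in the workflow.

```ts
// Standard Gmail search operator for "received within the last day".
const query = "in:inbox newer_than:1d";

type Message = { from: string; subject: string; snippet: string };

// Turn the fetched messages into one prompt the LangChain agent can summarize.
function buildDigestPrompt(messages: Message[]): string {
  const lines = messages.map(
    (m, i) => `${i + 1}. From: ${m.from} | Subject: ${m.subject}\n   ${m.snippet}`
  );
  return (
    `Summarize the following ${messages.length} emails from the last 24 hours.\n` +
    `Extract key topics, tasks, deadlines, and issues as bullet points.\n\n` +
    lines.join("\n")
  );
}
```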
by Jimleuk
This n8n template offers a simple yet capable chatbot assistant that can answer course enquiries over SMS. Given the right access to data, AI agents are capable of planning and performing relatively complex research tasks to get their answers. In this example, the agent must first understand the database schema and retrieve lists of values before generating its own query to search over the database.

Check out the example database here: https://airtable.com/appO5xvP1aUBYKyJ7/shr8jSFDaghubDOrw

## How it works
1. A Twilio trigger gives us the ability to receive SMS input into our workflow via webhook.
2. The message is directed to our AI agent, which is instructed to assist the user and use the course database as reference.
3. The database is an Airtable base. The agent autonomously figures out which tool it needs to use and generates its own "filter_by_formula" query to search over the available courses (an illustrative query appears at the end of this section).
4. On successful search results, the agent uses this information to answer the user's query.
5. The agent's output is logged in a second sheet of the Airtable base. We can use this later for analysis and lead gen.
6. Finally, the response is sent back to the user through SMS using Twilio.

## How to use
- Ensure your Twilio number is set to forward messages to this workflow's webhook URL.
- Configure and update the course database as required. If you're not interested in courses, you can swap this out for inventory, deliveries, or any other data relevant to your business.
- Ask questions like:
  - "Can you help me find suitable courses to fill my Wednesday mornings?"
  - "Which courses are being instructed by professor Lee?"
  - "I'm interested in creative arts. What courses are available which could be relevant to me?"

## Requirements
- Twilio for receiving and sending SMS
- OpenAI for the LLM and agent
- Airtable for the course database

## Customising this workflow
- Add additional tools and expand the range of queries the agent is able to answer or assist with.
- Not using Airtable? This technique also works with SQL databases like PostgreSQL.
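For a concrete picture of what the agent produces, here is a hedged sketch of an Airtable REST query using a generated filterByFormula. The base id comes from the example link above, but the table name and the {Instructor} field are hypothetical placeholders for whatever your base actually contains:

```ts
// Query Airtable's REST API with a formula the agent generated.
async function searchCourses(apiKey: string, formula: string) {
  const url =
    "https://api.airtable.com/v0/appO5xvP1aUBYKyJ7/Courses?filterByFormula=" + // table name assumed
    encodeURIComponent(formula);
  const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
  return (await res.json()).records;
}

// e.g. for "Which courses are being instructed by professor Lee?"
// the agent might generate a case-insensitive match on a hypothetical field:
const formula = `SEARCH("lee", LOWER({Instructor}))`;
```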
by Mohan Gopal
## Overview
This release introduces a voice-enabled tour recommendation system that leverages n8n, the ElevenLabs Voice Agent, OpenAI GPT-4o, and the Pinecone vector DB to deliver personalized travel itineraries based on spoken input. Users speak their preferences to the ElevenLabs voice agent, which then triggers an n8n workflow that returns a tailored tour plan.

## Features
- Voice interaction with an AI-powered travel agent via ElevenLabs
- Uses GPT-4o for contextual understanding and generation
- Dynamic query handling with vector-based search using Pinecone
- Fast response generation via an n8n webhook
- Modular agent memory and role design for scalable enhancement

## Pre-requisites
- n8n account with workflow creation access
- ElevenLabs account with agent and webhook setup
- OpenAI API key (GPT-4o access)
- Pinecone account for the vector database
- A list of vectorized tour packages, created using this n8n embedder: https://creators.n8n.io/workflows/5085

## Setup Instructions

### Step 1: Configure the Voice Agent Webhook in ElevenLabs
- Use the POST method
- Webhook URL: https://...
- Break the voice input down into: destination, type of tour, number of days, and number of passengers (an example payload appears at the end of this section)

### Step 2: Set Up the AI Agent Prompt in ElevenLabs
Use a conversational style with summaries, clarifying questions, and affirmations. Example prompt: "You use a natural speech style and periodically summarize... Your goal is to help callers create a personalized tour plan."

### Step 3: Select the LLM
- LLM: GPT-4o Mini
- Memory window: up to 5 contexts

### Step 4: Integrate Tools
- Use Custom Tool: n8n
- ID: tool_xxxxxx
- Tool description: "Generates travel plan once the details are collected"

### Step 5: Build the n8n Workflow
- Trigger: Webhook (POST)
- Process user input with the Tour Recommendation AI Agent
- Use the OpenAI Chat Model (GPT-4o) for reasoning
- Query the Pinecone Vector Store using the Tour Builder Q&A node
- Respond with a structured itinerary plan via the webhook response

## How to use
1. Execute the n8n workflow (the webhook waits for the voice trigger from ElevenLabs).
2. Start the ElevenLabs voice agent.
3. Request a tour plan to any destination, giving the details of your tour preferences.
4. Wait for the voice agent to respond with tour package suggestions after fetching the tour details from the n8n workflow.
5. Close the conversation.

| Area | Improvement |
| --- | --- |
| 🔉 Voice UX | Natural-sounding travel agent using ElevenLabs |
| 💡 Personalization | GPT-4o adapts based on travel style & preferences |
| 📚 Knowledge Base | Pinecone-powered vector retrieval of real tour data |
| 🔁 Reusability | Modular workflow with reusable embedding tools |
| ⚙️ System Design | Separation of memory, logic, and data layers |

## Who is this for?
- **Travel agencies & DMCs** – Offer ultra-personalized packages based on customer queries. Let AI do the matching.
- **Tour package aggregators** – Auto-curate and send matching packages from your catalog — no manual searching needed.
- **Content & marketing teams** – Craft customized tour recommendations for email campaigns and newsletters.
- **Tech-enabled travel startups** – Embed this intelligence in your workflows, CRMs, or chatbots to delight customers.
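As promised in Step 1, here is a sketch of what the JSON body the ElevenLabs agent tool POSTs to the n8n webhook might look like once the four details have been collected. The field names are assumptions chosen to mirror that breakdown; use whatever names your webhook expects:

```ts
// Hypothetical shape of the webhook payload from the voice agent.
interface TourRequest {
  destination: string;          // e.g. "Kyoto"
  tour_type: string;            // e.g. "cultural", "adventure"
  number_of_days: number;       // length of the trip
  number_of_passengers: number; // party size
}

const example: TourRequest = {
  destination: "Kyoto",
  tour_type: "cultural",
  number_of_days: 5,
  number_of_passengers: 2,
};
```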
by Ai Lin ⌘
## 🎯 What It Does
This project lets you talk to Siri (via Apple Shortcuts) to record or query your daily spending. The shortcut sends your message to an n8n webhook, which uses AI to decide whether the request is for writing or reading finance data, then replies with a human-friendly message — all powered by n8n + AI + Google Sheets.

⸻

## 🌐 PART 1: n8n Setup

🧩 1. Create a Webhook Trigger in n8n
• Add a node: Webhook
• Set HTTP Method: POST
• Set Path: siri-finance
• Enable "Respond to Webhook" = ✅

🧠 2. Add an AI Agent Node (e.g., OpenAI, Ollama, Gemini)
• Use a system prompt like: "You are a finance assistant. Decide if the user wants to record or read transactions. If it's recording, return a JSON object with date, type, name, amount, and expense/income. If it's reading, return date range and type (Expense/Income). Always reply with a human-friendly summary." (A sketch of the two JSON shapes appears at the end of this guide.)
• Input: {{ $json.text }} (from the webhook)
• Output: structured json.output

🧮 3. (Optional) Add Logic to Write to a DB / Supabase / Google Sheets
• Append tool: adds a new row
• Read tool: queries past data

Now your n8n flow is ready!

⸻

## 📱 PART 2: iOS Shortcut Setup

⚙️ 1. Create a New Shortcut
• Name it: 記帳助理 (or "Finance Bot")
• Add Action: Ask for Input
  • Prompt: 請說出你的記帳內容 ("Please say what you'd like to record")
  • Input Type: Text
• Add Action: Get Contents of URL
  • Method: POST
  • URL: https://your-n8n-domain/webhook/siri-finance
  • Headers: Content-Type: application/json
  • Request Body: { "text": "Provided Input" }
  • Replace "Provided Input" with the magic variable → Input Result

🔊 2. Show Result
• Add Action: Show Result
• Content: Get Contents of URL

🗣️ 3. Optional: Add "Speak Text"
• If you want Siri to speak the reply back, add Speak Text after Show Result.

⸻

## ✅ Example Usage
• You: "Hey Siri, 開支$50 早餐" ("expense $50, breakfast")
• Siri: "已記錄支出:項目 早餐,金額 $50,已寫入" ("Expense recorded: item Breakfast, amount $50, saved")

Or:
• You: "查一下我過去7日用了幾多錢" ("Check how much I spent over the past 7 days")
• Siri: "你過去7日總支出為 $7684.64,包括:⋯⋯" ("Your total spending over the past 7 days was $7,684.64, including: …")

⸻

## 📦 Files to Share
You can package the following:
• A .shortcut file export
• A sample n8n workflow .json
• An optional Supabase schema / Google Sheet template

⸻

## 💡 Tips for Newcomers
• Keep your webhook public, but protect it with a token if needed.
• Ensure you handle emoji and newlines safely for iOS compatibility.
• Add logging nodes in n8n to help debug Siri messages.

⸻

## 🗣️ Optional Project Name
"Siri 記帳助理" / "Finance VoiceBot" — a simple voice-first way to manage your daily expenses.
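As referenced in PART 1, here is one way to express the two shapes the system prompt asks for, side by side, as a TypeScript discriminated union. The action discriminator and exact key names are illustrative; adapt them to whatever your prompt actually returns:

```ts
// The AI agent's structured output: either record one transaction,
// or read back a date range of transactions.
type FinanceIntent =
  | {
      action: "record";
      date: string;                  // e.g. "2024-05-01"
      type: "Expense" | "Income";
      name: string;                  // e.g. "Breakfast"
      amount: number;                // e.g. 50
    }
  | {
      action: "read";
      from: string;                  // start of the date range
      to: string;                    // end of the date range
      type: "Expense" | "Income";
    };
```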
by Fan Luo
# Auto-Share YouTube Videos with AI-Generated Posts to Facebook, X and Notify in Discord

This n8n template demonstrates how to use an LLM like DeepSeek to generate a post and share it to a Facebook page and X automatically whenever a new video is published to a YouTube channel.

## How it works
1. We first define an RSS feed with a polling schedule to pull YouTube videos from a specified channel (the feed URL format is shown below).
2. We prompt the AI agent to generate a post with a proper URL and hashtags based on the video metadata.
3. The workflow then automatically creates a new post on Facebook and X via their APIs.
4. Finally, it posts a new message to a Discord channel via webhook.

## How to use
Simply set up the RSS polling trigger to automatically trigger the workflow.

## Requirements
- Facebook API setup (see step-by-step tutorials)
- X v2 API setup (see step-by-step tutorials)
- Discord channel webhook (see step-by-step tutorials)

## Need Help?
Contact me via My Blog or ask in the Forum! Happy Hacking!
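The RSS trigger works because YouTube exposes a standard feed per channel; only the channel id varies:

```ts
// Build the RSS feed URL for a YouTube channel (the id below is a placeholder).
const channelId = "UCxxxxxxxxxxxxxxxxxxxxxx";
const feedUrl = `https://www.youtube.com/feeds/videos.xml?channel_id=${channelId}`;
```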
by Guillaume Duvernay
Unlock a new level of sophistication for your AI agents with this template. While the native n8n Think Tool is great for giving an agent an internal monologue, it's limited to one instance. This workflow provides a clever solution using a sub-workflow to create multiple, custom thinking tools, each with its own specific purpose.

This template provides the foundation for building agents that can plan, act, and then reflect on their actions before proceeding. Instead of just reacting, your agent can now follow a structured, multi-step reasoning process that you design, leading to more reliable and powerful automations.

## Who is this for?
- **AI and automation developers:** Anyone looking to build complex, multi-tool agents that require robust logic and planning capabilities.
- **LangChain enthusiasts:** Users familiar with advanced agent concepts like ReAct (Reason-Act) will find this a practical way to implement similar frameworks in n8n.
- **Problem solvers:** If your current agent struggles with complex tasks, giving it distinct steps for planning and reflection can dramatically improve its performance.

## What problem does this solve?
- **Bypasses the single Think Tool limit:** The core of this template is a technique that allows you to add as many distinct thinking steps to your agent as you need.
- **Enables complex reasoning:** You can design a structured thought process for your agent, such as "Plan the entire process," "Execute Step 1," and "Reflect on the result," making it behave more intelligently.
- **Improves agent reliability and debugging:** By forcing the agent to write down its thoughts at different stages, you can easily see its line of reasoning, making it less prone to errors and much easier to debug when things go wrong.
- **Provides a blueprint for sophisticated AI:** This is not just a simple tool; it's a foundational framework for building state-of-the-art AI agents that can handle more nuanced and multi-step tasks.

## How it works
- **The re-usable "thinking space":** The magic of this template is a simple sub-workflow that does nothing but receive text. This workflow acts as a reusable "scratchpad" (a minimal sketch of it appears at the end of this section).
- **Creating custom thinking tools:** In the main workflow, we use the Tool (Workflow) node to call this "scratchpad" sub-workflow multiple times. We give each of these tools a unique name (e.g., Initial thoughts, Additional thoughts).
- **The power of descriptions:** The key is the description you give each of these tool nodes. This description tells the agent when and how it should use that specific thinking step. For example, the Initial thoughts tool is described as the place to create a plan at the start of a task.
- **Orchestration via system prompt:** The main AI Agent's system prompt acts as the conductor, instructing the agent on the overall process and telling it about its new thinking abilities (e.g., "Always start by using the Initial thoughts tool to make a plan...").
- **A practical example:** This template includes two thinking tools to demonstrate a "Plan and Reflect" cycle, but you can add many more to fit your needs.

## Setup
1. **Add your own "action" tools:** This template provides the thinking framework. To make it useful, you need to give the agent something to do. Add your own tools to the AI Agent, such as a web search tool, a database lookup, or an API call.
2. **Customize the thinking tools:** Edit the description of the existing Initial thoughts and Additional thoughts tools. Make them relevant to the new action tools you've added. For example, "Plan which of the web search or database tools to use."
3. **Update the agent's brain:** Modify the system prompt in the main AI Agent node. Tell it about the new action tools you've added and how it should use your customized thinking tools to complete its tasks.
4. **Connect your AI model:** Select the OpenAI Chat Model node and add your credentials.

## Taking it further
- **Create more granular thinking steps:** Add more thinking tools for different stages of a process, like a "Hypothesize a solution" tool, a "Verify assumptions" tool, or a "Final answer check" tool.
- **Customize the thought process:** You can change *how* the agent thinks by editing the prompt inside the fromAI('Thoughts', ...) field within each tool. You could ask for thoughts in a specific format, like bullet points or a JSON object.
- **Change the workflow trigger:** Switch the chat trigger for a Telegram trigger, email, Slack, whatever you need for your use case!
- **Integrate with memory:** For even more power, combine this framework with a long-term memory solution, allowing the agent to reflect on its thoughts from past conversations.
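As promised above, a minimal sketch of the scratchpad sub-workflow's body as a single n8n Code node. It assumes the sub-workflow receives the agent's text under a Thoughts input field, matching the fromAI('Thoughts', ...) expression mentioned earlier; your field name may differ:

```ts
// The scratchpad does nothing but echo the agent's thoughts back,
// giving the agent a place to "write things down" mid-task.
const thoughts = $json.Thoughts; // input field name assumed
return [{ json: { thoughts } }];
```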
by Immanuel
# AI-powered Telegram message analysis with multi-tool notifications (Gmail, Telegram)

This workflow triggers on Telegram updates, analyzes messages with an AI Agent using MCP tools, and sends notifications via Gmail and Telegram.

## Who is this for?
This template is for teams, businesses, or individuals using Telegram for communication who need automated, AI-driven insights and notifications. It's ideal for customer support teams, project managers, or tech enthusiasts wanting to process Telegram messages intelligently and receive alerts via Gmail and Telegram.

## What problem is this workflow solving? Use case
This workflow solves the challenge of manually monitoring Telegram messages by automating message analysis and notifications. For example, a support team can use it to analyze customer queries on Telegram with AI tools (OpenAI, Airbnb, Brave, FireCrawl) and get notified via Gmail and Telegram for quick responses.

## What this workflow does
1. Triggers on a Telegram update (e.g., a new message) using the Listen for Telegram Updates node.
2. Processes the message with the Analyze Message with AI node, an AI Agent using MCP tools such as OpenAI Chat, Airbnb search, Brave search, and FireCrawl.
3. Sends notifications via the Send Gmail Notification and Send Telegram Alert nodes, including AI-generated insights.

## Setup
Prerequisites:
- Telegram bot token for the trigger and notification nodes
- Gmail API credentials for sending emails
- API keys for OpenAI, Airbnb, Brave, and FireCrawl (used in the AI Agent)

Steps:
1. Configure the Listen for Telegram Updates node with your Telegram bot token.
2. Set up the Analyze Message with AI node with your OpenAI API key and other tool credentials.
3. Configure the Send Gmail Notification node with your Gmail credentials.
4. Set up the Send Telegram Alert node with your Telegram bot token.
5. Test by sending a Telegram message to trigger the workflow.

Setup takes ~15-30 minutes. Detailed instructions are in sticky notes within the workflow.

## How to customize this workflow to your needs
- Add more AI tools (e.g., sentiment analysis) in the Analyze Message with AI node.
- Modify the notification message in the Send Gmail Notification and Send Telegram Alert nodes to include specific AI outputs.
- Add nodes for other channels like Slack or SMS after the AI Agent.

## Disclaimer
This workflow uses community nodes (e.g., Airbnb, Brave, FireCrawl), which are available only in self-hosted n8n instances. Ensure your n8n setup supports community nodes before using this template.
by Artur
## Overview
This automated workflow fetches Upwork job postings using Apify, removes duplicate job listings via Airtable, and sends new job opportunities to Slack.

Key features:
- **Automated job retrieval** from Upwork via the Apify API
- **Duplicate filtering** using Airtable to store only unique jobs
- **Slack notifications** for new job postings
- **Runs every 20 minutes** during working hours (9 AM - 5 PM)

This workflow requires an active Apify subscription to function, as it uses the Apify Upwork API to fetch job listings.

## Who is This For?
This workflow is ideal for:
- Freelancers looking to track Upwork jobs in real time
- Recruiters automating job collection for analytics
- Developers who want to integrate Upwork job data into their applications

## What Problem Does This Solve?
Manually checking Upwork for jobs is time-consuming and inefficient. This workflow:
- Automates job discovery based on your keywords
- Filters out duplicate listings, ensuring only new jobs are stored
- Notifies you on Slack when new jobs appear

## How the Workflow Works
1. **Schedule Trigger (every 20 minutes):** Triggers the workflow at 20-minute intervals and ensures job searches are only executed during working hours (9 AM - 5 PM).
2. **Query Upwork for jobs:** Uses the Apify API to scrape Upwork job posts for specific keywords (e.g., "n8n", "Python").
3. **Find existing jobs in Airtable:** Searches Airtable to check whether a job (based on title and link) already exists.
4. **Filter out duplicate jobs:** The Merge node compares Upwork jobs with Airtable data, and the IF node filters out jobs that are already stored in the database (sketched below).
5. **Save only new jobs in Airtable:** The Insert node adds only new job listings to the Airtable collection.
6. **Send a Slack notification:** If a new job is found, a Slack message is sent with the job details.

## Setup Guide
Required API keys:
- **Upwork scraper (Apify token):** Get your token from Apify
- **Airtable credentials**
- **Slack API token:** Connect Slack to n8n and set the channel ID (default: #general)

Configuration steps:
1. Modify the search keywords in the 'Assign Parameters' node (startUrls).
2. Adjust the working hours in the 'If Working Hours' node.
3. Set your Slack channel in the Slack node.
4. Ensure Airtable is connected properly - you'll need to create a table with 'title' and 'link' columns.
5. Adjust the 'If Working Hours' node to match your timezone and hours, or remove it altogether to receive notifications and updates constantly.

## How to Customize the Workflow
- **Change keywords:** Update the startUrls in the 'Assign Parameters' node to track different job categories.
- **Change working hours:** Modify the conditions in the IF node to filter times based on your needs.
- **Modify Slack notifications:** Adjust the Slack message format to include additional job details.

## Why Use This Workflow?
- Automated job tracking without manual searches
- Prevents duplicate entries in Airtable
- Instant Slack notifications for new job opportunities
- Customizable – adapt the workflow to different job categories

## Next Steps
- Run the workflow and test with a small set of keywords
- Expand job categories for better coverage
- Enhance notifications by integrating Telegram, email, or a dashboard

This workflow ensures real-time job tracking, prevents duplicates, and keeps you updated effortlessly.
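As noted in step 4, the duplicate filter amounts to a set-membership check on title + link. A small TypeScript sketch of the idea (the item shapes are illustrative; the workflow itself does this with Merge and IF nodes):

```ts
type Job = { title: string; link: string };

// Keep only the Upwork results that aren't already stored in Airtable,
// keyed on the combination of title and link as the workflow describes.
function filterNewJobs(upworkJobs: Job[], storedJobs: Job[]): Job[] {
  const seen = new Set(storedJobs.map((j) => `${j.title}::${j.link}`));
  return upworkJobs.filter((j) => !seen.has(`${j.title}::${j.link}`));
}
```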