by Yosua Surojo
Who it's for

This workflow is for anyone who wants to build an automated, AI-enhanced reading list. Ideal for:

- Knowledge workers and researchers who collect and organize articles
- Students managing study materials
- Productivity hackers who use Telegram and Notion for personal knowledge management
- Anyone using the AI-Enhanced Knowledge Base Tracker Notion Template

How it works

This workflow takes any article link sent to your Telegram bot and automatically:

- Parses the article into a clean title and body
- Uses OpenAI to generate a 1–2 sentence highlight and a topic tag
- Saves it into your Notion database
- Sends a confirmation message with the highlight and Notion link back to Telegram

Main steps:

1. Telegram Trigger - Listens for incoming messages containing an article link.
2. Fetch Article Title & Content - Calls the article-parser-api deployed on Vercel to fetch the article and parse it into structured JSON (title and content).
3. Generate Highlight + Tag (AI Agent) - Processes the parsed content to generate the Highlight and Type tag values.
4. Structured Metadata for Notion - Shapes the extracted data before saving it to Notion (see the sketch at the end of this description).
5. Save Article to Notion Database - Inserts the article and generated metadata into your Notion knowledge base.
6. Confirm Save via Telegram - Sends a confirmation message and the Notion page link back to the Telegram chat after the entry is created.

Setup

1. Create and connect your API credentials: Telegram bot, OpenAI API key, Notion integration.
2. Deploy the article parser: use the article-parser-api repo and deploy it to Vercel or any serverless environment.
3. Link your Notion database: duplicate the AI-Enhanced Knowledge Base Tracker, copy the database URL, and connect it in the Notion node.
4. Test your workflow: click Execute workflow, send an article link to your Telegram bot, and once verified, activate the workflow so it runs automatically.

Requirements

- Telegram bot token
- OpenAI API key
- Notion integration and shared database
- A deployed article parser (e.g., article-parser-api)

Optional customization

- Edit the AI Agent prompt to change tone or tagging style
- Add filtering or additional fields in the Edit Fields node
- Trigger from other sources (e.g., Slack or email)
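As a rough illustration of the "Structured Metadata for Notion" step, here is a minimal Code node sketch. The node names follow the steps above, but the field names (title, highlight, type) and Notion property names are assumptions for illustration, not the template's actual configuration:

```js
// n8n Code node: shape parser + AI output into the fields the Notion node expects.
// Adjust the field names to match your parser response and database properties.
const article = $('Fetch Article Title & Content').first().json;
const ai = $('Generate Highlight + Tag (AI Agent)').first().json;

return [{
  json: {
    Name: article.title ?? 'Untitled article',
    URL: $('Telegram Trigger').first().json.message?.text ?? '',
    Highlight: ai.highlight ?? '',
    Type: ai.type ?? 'Uncategorized',
    Saved: new Date().toISOString(),
  },
}];
```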
by Milorad Filipović
How it works

It's very important to come prepared to sales calls, which often means a lot of manual research about the people you're meeting. This workflow delivers the latest news about the businesses you are about to interact with each day:

- **Scans your calendar**: Each morning, it reviews your Google Calendar for any scheduled meetings or calls with companies.
- **Fetches latest news**: For each identified company, it searches the web for the most recent and relevant news articles using newsapi.org (see the sketch at the end of this description).
- **Delivers insights**: You receive personalized emails via Gmail, each dedicated to a company you're meeting with that day, containing a curated list of news headlines, brief descriptions, and direct links to full articles.

Setup steps

The workflow requires you to have the following accounts set up in their respective nodes:

- Google Calendar
- Gmail

Besides those, there are a few parameters in the node called Setup that can be used to tweak the workflow.
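For the news lookup, here is a minimal Code node sketch that assembles a newsapi.org request URL for a following HTTP Request node. The `companyName` field is an assumption (extract it however suits your calendar events), and the NewsAPI key should be supplied via the HTTP Request node's credentials:

```js
// n8n Code node (run once per item): build a NewsAPI query for one company.
const company = $json.companyName; // e.g. parsed from the event title or attendee domain
const from = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000) // last 7 days
  .toISOString()
  .slice(0, 10);

const url = 'https://newsapi.org/v2/everything'
  + `?q=${encodeURIComponent(`"${company}"`)}`
  + `&from=${from}`
  + '&sortBy=publishedAt'
  + '&language=en'
  + '&pageSize=10';

// Supply the API key in the HTTP Request node (X-Api-Key header),
// not hard-coded here.
return [{ json: { company, url } }];
```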
by Daniel Nolde
What it is

Chat with your event schedule from Google Sheets in Telegram:

- "When is the next meetup?"
- "How many events are there next month?"
- "Who presented most often?"
- "Which future meetups have no presenters yet?"

This workflow lets you chat with a Telegram bot about past, present, and future events that are scheduled in a Google Spreadsheet. (This proof-of-concept was created as a demo for a hackathon of an AI & Developer Meetup in Da Nang, Vietnam, which uses a Telegram group to organize.)

Who it is for

If you want an easy way for your audience to get information about your events, you can use this workflow for the same purpose, or easily adapt it to different use cases where you want to query smaller amounts of tabular data in natural language.

How it works

Upon getting triggered by a chat message to a Telegram bot, the schedule of meetups is retrieved from Google Sheets, converted into markdown table syntax, and fed into the system prompt of an LLM (we're using OpenRouter in this example), whose output is posted back as the answer into the same Telegram chat. A sketch of the table conversion follows at the end of this description.

Setup steps

To review it in action: you can temporarily use the workflow via an existing Telegram bot. Simply point your Telegram client to https://t.me/AiDaNangBot and start asking questions like:

- "When is the next meetup?"
- "What future meetings do not have presenters?"
- "Who presented on Future of Human Relationships?"

To build upon this workflow:

1. Import the workflow.
2. Customize the Google Sheets credentials for your individual access.
3. Create a Telegram bot and connect it to the workflow by entering its API token into the credentials used in the Telegram Trigger node.
4. In the "Settings" node, replace the "scheduleURL" with the URL of your own copy of the Google Spreadsheet, or copy the Event Schedule Template Sheet to spin off your own. The exact structure of the spreadsheet doesn't matter; it's just important that you semantically structure your information in dedicated columns, clearly labeled in the header row.
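The markdown-table conversion could look roughly like this in a Code node, assuming each spreadsheet row arrives as one item with the header names as keys (which is how the Google Sheets node outputs rows):

```js
// n8n Code node (run once for all items): flatten sheet rows into a markdown
// table that can be interpolated into the LLM's system prompt.
const rows = $input.all().map((item) => item.json);
if (rows.length === 0) return [{ json: { scheduleTable: '(no events found)' } }];

const headers = Object.keys(rows[0]);
const lines = [
  `| ${headers.join(' | ')} |`,
  `| ${headers.map(() => '---').join(' | ')} |`,
  ...rows.map((row) => `| ${headers.map((h) => String(row[h] ?? '')).join(' | ')} |`),
];

return [{ json: { scheduleTable: lines.join('\n') } }];
```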
by Niklas Hatje
Use case

When working with multiple teams, bugs must get in front of the right team as quickly as possible to be resolved. Normally this involves manually grooming new bugs that arrive in your ticketing system (in our case, Linear). We found this way too time-consuming. That's why we built this workflow.

What this workflow does

This workflow triggers every time a Linear issue is created or updated within a certain team. For us at n8n, we created one general team called Engineering where all bugs get added in the beginning. The workflow then checks if the issue meets the criteria to be auto-moved to a certain team: in our case, that the description is filled, that it has the bug label, and that it's in the Triage state. The workflow then classifies the bug using OpenAI's GPT-4 model before updating the team property of the Linear issue. If the AI fails to classify a team, the workflow sends an alert to Slack. (A sketch of how the classification result can be validated follows at the end of this description.)

Setup

1. Add your Linear and OpenAI credentials.
2. Change the team in the Linear Trigger to match your needs.
3. Customize your teams and their areas of responsibility in the Set me up node. Please use the format Teamname. Also, make sure that the team names match the names in Linear exactly.
4. Change the Slack channel in the Set me up node to your Slack channel of choice.

How to adjust it to your needs

- Play around with the context that you're giving to OpenAI to make sure the model has enough knowledge about your teams and their areas of responsibility.
- Adjust the handling of AI failures to your needs.

How to enhance this workflow

At n8n we use this workflow in combination with some others. For example, we also run an automation that enables everyone to add new bugs easily, with the right data, via a /bug command in Slack (check out this template if that's interesting to you).

This workflow was built using n8n version 1.30.0.
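One way to guard against misclassification is a small Code node between the model and the Linear update that validates the suggested team against your configured list. This is a sketch under assumptions: the team names are placeholders for whatever you put in the Set me up node, and the output field of your model node may differ:

```js
// n8n Code node: accept the model's answer only if it matches a known team.
const knownTeams = ['Engineering', 'Frontend', 'Backend', 'Infrastructure']; // placeholders
const suggested = ($json.message?.content ?? $json.text ?? '').trim();

const team = knownTeams.find(
  (t) => t.toLowerCase() === suggested.toLowerCase(),
);

// Downstream, an IF node can route items with team === null to the Slack alert.
return [{ json: { team: team ?? null, raw: suggested } }];
```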
by Jimleuk
This n8n workflow demonstrates how to automate the often time-consuming form-filling tasks in the early stages of the tendering process: the Request for Proposal document, or "RFP". It does this by using a company's knowledge base to generate question-and-answer pairs with Large Language Models.

How it works

1. A buyer's RFP is submitted to the workflow as a digital document that can be parsed.
2. Our first AI agent scans and extracts all questions from the document into list form (see the sketch at the end of this description for how the questions fan out into items).
3. The supplier sets up an OpenAI assistant beforehand, loaded with company brand, marketing, and technical documents.
4. The workflow loops through each of the buyer's questions and poses them to the OpenAI assistant.
5. The assistant's answers are captured until all questions are satisfied, then exported into a new document for review.
6. A sales team member is then able to use this document to respond quickly to the RFP before their competitors.

Example webhook request

```bash
curl --location 'https://<n8n_webhook_url>' \
  --form 'id="RFP001"' \
  --form 'title="BlueChip Travel and StarBus Web Services"' \
  --form 'reply_to="jim@example.com"' \
  --form 'data=@"k9pnbALxX/RFP Questionnaire.pdf"'
```

Requirements

- An OpenAI account to use AI services.

Customising the workflow

OpenAI assistants are only one approach to hosting a company knowledge base for AI to use. Exploring different solutions, such as building your own RAG-powered database, can sometimes yield better results in terms of cost and control over how the data is managed.
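A minimal sketch of the fan-out step, assuming the extraction agent returns its list in a `questions` array (the field name is an assumption; adjust it to your agent's output):

```js
// n8n Code node: emit one item per extracted question so a loop node can
// pose them to the OpenAI assistant one at a time.
const questions = $json.questions ?? [];

return questions.map((question, index) => ({
  json: { index: index + 1, question },
}));
```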
by Jimleuk
This n8n workflow demonstrates how to automate image captioning tasks using Gemini 1.5 Pro, a multimodal LLM which can accept and analyse images. This is a really simple example of how easy it is to build and leverage powerful AI models in your repetitive tasks.

How it works

- For this demo, we import a public image from a popular stock photography website, Pexels.com, into our workflow using the HTTP Request node.
- With multimodal LLMs, there is little to preprocess other than ensuring the image dimensions fit within the LLM's accepted limits. Though not essential, we resize the image using the Edit Image node to achieve faster processing.
- The image is used as an input to the Basic LLM node by defining a "user message" entry with the binary (data) type.
- The LLM node has the Gemini 1.5 Pro language model attached, and we prompt it to generate a caption title and text appropriate for the image it sees.
- Once generated, the caption text is positioned over the original image to complete the task. We can calculate the positioning relative to the number of characters produced using the Code node (see the sketch at the end of this description).

An example of the combined image and caption can be found here: https://res.cloudinary.com/daglih2g8/image/upload/f_auto,q_auto/v1/n8n-workflows/l5xbb4ze4wyxwwefqmnc

Requirements

- Google Gemini API key.
- Access to Google Drive.

Customising the workflow

- Not using Google Gemini? n8n's Basic LLM node supports the standard syntax for image content for models that support it; try using GPT-4o, Claude, or LLaVA (via Ollama).
- Google Drive is only used for demonstration purposes. Feel free to swap this out for other triggers, such as webhooks, to fit your use case.
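The length-based positioning could be approximated like this in a Code node. This is purely an illustrative heuristic: the 600×400 canvas, font size, and character-width factor are assumptions to match against your Edit Image node's actual settings:

```js
// n8n Code node: size and place the caption block based on its character count.
const caption = $json.text ?? '';
const imageWidth = 600;   // assumed canvas width
const imageHeight = 400;  // assumed canvas height
const fontSize = 16;
const avgCharWidth = fontSize * 0.6; // rough width of one character
const charsPerLine = Math.floor(imageWidth / avgCharWidth);
const lineCount = Math.ceil(caption.length / charsPerLine);

return [{
  json: {
    caption,
    fontSize,
    positionX: 10,
    // Push the text block up from the bottom edge by the lines it needs.
    positionY: imageHeight - lineCount * (fontSize + 4) - 10,
  },
}];
```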
by Yaron Been
Transform YouTube comments into actionable insights with automated AI analysis and professional email reports. This intelligent workflow monitors your Google Sheets for YouTube video IDs, fetches comments using the YouTube API, performs comprehensive AI sentiment analysis, and delivers formatted email reports with viewer insights, helping content creators understand their audience and improve engagement.

🚀 What It Does

- Smart Video Monitoring: Watches Google Sheets for new YouTube video IDs marked as "Pending" and triggers automated analysis
- Complete Comment Collection: Fetches up to 100 top comments per video using the YouTube API with relevance-based ordering (see the sketch at the end of this description)
- AI-Powered Analysis: Uses GPT-4 to analyze comments for sentiment, themes, questions, feedback, and actionable insights
- Professional Email Reports: Generates detailed HTML reports with statistics, sentiment breakdown, and improvement recommendations
- Automated Status Tracking: Updates spreadsheet status to prevent duplicate processing and maintain an organized workflow

🎯 Key Benefits

- ✅ Deep Audience Insights: Understand what viewers really think about your content
- ✅ Save Hours of Manual Work: Automated comment analysis vs. reading hundreds of comments
- ✅ Improve Content Strategy: Get actionable feedback for better video performance
- ✅ Track Sentiment Trends: Monitor positive/negative feedback patterns
- ✅ Professional Reporting: Receive formatted analysis reports via email
- ✅ Scalable Analysis: Process multiple videos automatically

🏢 Perfect For

Content Creators & YouTubers
- Individual creators tracking audience engagement
- Educational channels analyzing learning feedback
- Entertainment creators understanding viewer preferences
- Business channels monitoring brand sentiment

Marketing & Business Applications
- Brand Monitoring: Track sentiment on branded content and partnerships
- Audience Research: Understand viewer demographics and preferences
- Content Optimization: Identify what resonates with your audience
- Competitor Analysis: Analyze comments on competitor videos (where allowed)

⚙️ What's Included

- Complete Analytics Workflow: Ready-to-deploy YouTube comment analysis system
- Google Sheets Integration: Simple spreadsheet-based video management
- YouTube API Integration: Automated comment fetching with proper authentication
- AI Analysis Engine: GPT-4 powered sentiment and insight generation
- Email Reporting System: Professional HTML-formatted reports
- Status Management: Automatic processing tracking and duplicate prevention

🔧 Setup Requirements

- n8n Platform: Cloud or self-hosted instance
- YouTube API Credentials: Google Cloud Console API access
- OpenAI API: GPT-4 access for comment analysis
- Google Sheets: Video ID management and status tracking
- Gmail Account: For receiving analysis reports

📊 Required Google Sheets Structure

| ID | Video Title  | YouTube Video ID | Status    |
|----|--------------|------------------|-----------|
| 1  | My Tutorial  | dQw4w9WgXcQ      | Pending   |
| 2  | Product Demo | abc123def456     | Mail Sent |
| 3  | Weekly Vlog  | xyz789uvw012     | Draft     |

Status options: Draft → Pending → Mail Sent

📧 Sample Analysis Report

📺 YouTube Comments Analysis Report
Video: "How to Build Your First Website"

📊 Quick Statistics:
• Total Comments Analyzed: 87
• Average Likes per Comment: 3.2
• Total Replies: 156
• Sentiment Summary: Positive: 65%, Negative: 10%, Neutral: 25%

❓ Common Questions:
• "What hosting service do you recommend?"
• "Can I do this without coding experience?"
• "How much does domain registration cost?"

💡 Key Feedback Points:
• Tutorial pace is perfect for beginners
• More examples of finished websites requested
• Viewers want follow-up video on advanced features

🎯 Actionable Insights:
• Create hosting comparison video
• Add timestamps for different skill levels
• Consider beginner-friendly series expansion

🎨 Customization Options

- Analysis Depth: Adjust AI prompts for different analysis focuses (engagement, education, entertainment)
- Comment Limits: Modify the maximum comments processed (default: 100; AI analysis: 50)
- Report Recipients: Send reports to multiple team members or clients
- Custom Metrics: Add specific analysis criteria for your content niche
- Multi-Channel: Process videos from multiple YouTube channels
- Scheduling: Set up regular analysis of your latest videos

🏷️ Tags & Categories

#youtube-analytics #comment-analysis #content-creator-tools #ai-sentiment-analysis #video-insights #audience-research #youtube-api #content-optimization #social-media-analytics #creator-economy #video-marketing #engagement-analysis #content-strategy #ai-reporting #youtube-automation

💡 Use Case Examples

- Educational Channel: Analyze tutorial comments to identify confusing concepts and improve teaching methods
- Product Reviews: Monitor sentiment on review videos to understand customer satisfaction trends
- Entertainment Creator: Track audience reactions to different content formats and optimize future videos
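For reference, the comment fetch maps onto the YouTube Data API v3 commentThreads endpoint. Here is a sketch of a Code node assembling that request for an HTTP Request node; the `YouTube Video ID` key follows the sheet structure above, and the API key should come from the HTTP Request node's credentials:

```js
// n8n Code node: build the commentThreads request for one sheet row.
const videoId = $json['YouTube Video ID'];

const url = 'https://www.googleapis.com/youtube/v3/commentThreads'
  + '?part=snippet'
  + `&videoId=${encodeURIComponent(videoId)}`
  + '&maxResults=100'     // up to 100 top-level comments per page
  + '&order=relevance'    // relevance-based ordering, as described above
  + '&textFormat=plainText';

return [{ json: { videoId, url } }];
```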
by Agent Studio
Overview

This workflow aims to provide data visualization capabilities to a native SQL Agent. Together, they can help foster data analysis and data visualization within a team. It uses the native SQL Agent, which works well on its own, and adds visualization capabilities thanks to OpenAI's Structured Output and Quickchart.io.

How it works

1. Information extraction: The Information Extractor identifies and extracts the user's question. If the question includes a visualization aspect, the SQL Agent alone may not respond accurately.
2. SQL querying: A regular SQL Agent connects to a database, queries it, and translates the response into a human-readable format.
3. Chart decision: The Text Classifier determines whether the user would benefit from a chart to support the SQL Agent's response.
4. Chart generation: If a chart is needed, the sub-workflow dynamically generates a chart and appends it to the SQL Agent's response. If not, the SQL Agent's response is output as is.
5. Calling OpenAI for the chart definition: The sub-workflow calls OpenAI via the HTTP Request node to retrieve a chart definition.
6. Building and returning the chart: In the "Set Response" node, the chart definition is appended to a Quickchart.io URL, generating the final chart image (see the sketch at the end of this description). The AI Agent returns the response along with the chart.

How to use it

1. Use an existing database or create a new one. For example, I've used this Kaggle dataset and uploaded it to a Supabase DB.
2. Add the PostgreSQL or MySQL credentials. Alternatively, you can use SQLite binary files (check this template).
3. Activate the workflow.
4. Start chatting with the AI SQL Agent. If the Text Classifier determines a chart would be useful, it will generate one in addition to the SQL Agent's response.

Notes

The full Quickchart.io specification has not been fully integrated, so there may be some glitches (e.g., radar graphs may not display properly due to size limitations).
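The "Set Response" step boils down to URL-encoding a Chart.js definition into a Quickchart.io link. A minimal sketch, assuming the OpenAI call returns the spec in a `chartDefinition` field (the field name and the fallback spec are illustrative):

```js
// n8n Code node: turn a Chart.js definition into a Quickchart.io image URL.
const chartDefinition = $json.chartDefinition ?? {
  type: 'bar',
  data: {
    labels: ['Q1', 'Q2', 'Q3'],
    datasets: [{ label: 'Sales', data: [120, 90, 150] }],
  },
};

const chartUrl =
  'https://quickchart.io/chart?c=' +
  encodeURIComponent(JSON.stringify(chartDefinition));

return [{ json: { chartUrl } }];
```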
by Jimleuk
This n8n template builds upon a simple appointment request form and uses AI to qualify whether an incoming enquiry is suitable for, and worth the time of, an appointment. This demonstrates a lighter approach to using AI in your templates while handling a technically difficult problem: contextual understanding! This example can be used in a variety of contexts where figuring out what is and isn't relevant can save a lot of time for your organisation.

How it works

1. We start with a form trigger which asks for the purpose of the appointment.
2. Instantly, we can qualify this by using a Text Classifier node, which uses AI's contextual understanding to ensure the appointment is worthwhile. If not, an alternative is suggested instead.
3. Multi-page forms are then used to set the terms of the appointment and ask the user for a desired date and time.
4. An acknowledgement is sent to the user while an approval-by-email process is triggered in the background.
5. In a subworkflow, we use Gmail with the "wait for approval" operation to send an approval form to the admin user, who can either confirm or decline the appointment request.
6. When approved, a Google Calendar event is created. When declined, the user is notified via email that the appointment request was declined.

How to use

- Modify the enquiry classifier to determine which contexts are relevant to you.
- Configure the "wait for approval" node to send to an email address which is accessible to all appropriate team members.

Requirements

- OpenAI for LLM
- Gmail for email
- Google Calendar for appointments

Customising this workflow

- Not using Google Mail or Calendar? Feel free to swap these with other services.
- The "wait for approval" step is optional. Remove it if you wish to handle appointment request resolution in another way.
by Franz
🧠 Sentiment Analyzer

Google Sheets → OpenAI GPT-4o → QuickChart → Gmail

🚀 What this workflow does

1. Fetches customer reviews from a Google Sheet.
2. Classifies each review as Positive, Neutral, or Negative with GPT-4o-mini.
3. Writes the sentiment back to your sheet.
4. Builds a doughnut chart summarising the totals (a sketch of the tally step follows at the end of this description).
5. Emails the chart to your chosen recipient so the whole team stays in the loop.

Perfect for support teams, product managers, or anyone who wants a zero-code mood ring for their users' feedback.

🗺️ Node-by-node tour

| 🔩 Node | 💡 Purpose |
| --- | --- |
| Manual Trigger | Lets you test the workflow on demand. |
| Select Google Sheet | Points to the spreadsheet that holds your reviews. |
| Loop Over Items | Feeds each row through the analysis routine. |
| Sentiment Analysis (LangChain) | Calls GPT-4o-mini and returns only the sentiment category. |
| Update Google Sheet | Writes the new Sentiment value into column C. |
| Read Data from Google Sheet | Pulls the full sheet again to create a summary. |
| Extract Number of Answers per Sentiment (Code node) | Tallies up how many reviews fall into each category. |
| Generate QuickChart | Creates a doughnut (or pie) chart as a PNG. |
| Send Gmail with Sentiment Chart | Fires the chart off to your inbox. |
| (Sticky Notes) | Friendly setup tips scattered around the canvas. |

🛠️ Setup checklist

| ✅ Step | Where |
| --- | --- |
| Connect Google Sheets → paste your Spreadsheet ID & choose the correct sheet. | All Google Sheets nodes |
| Add OpenAI credentials (sk-… key). | Sentiment Analysis node |
| Configure Gmail OAuth2 + recipient address. | Gmail node |
| Match your sheet columns → "Review title", "Review text", empty "Sentiment". | Google Sheet itself |
| (Optional) Switch to gpt-4o for maximum accuracy. | Sentiment Analysis "Model" param |

🏃‍♂️ How to run

1. Drop a few sample reviews into the sheet.
2. Click "Test workflow" on the Manual Trigger.
3. Watch each row march through: sentiment appears in column C.
4. After all rows finish, check your inbox for a fresh chart. ✔️

✨ Ideas for next level

- Schedule the trigger (Cron) to auto-process new reviews daily.
- Feed the counts to Slack or Discord instead of email.
- Add a second GPT call to generate a short summary for each review.

Happy automating! 🎉
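The "Extract Number of Answers per Sentiment" step could look roughly like this. A minimal sketch, assuming the sheet rows arrive as items with a "Sentiment" column as configured above:

```js
// n8n Code node (run once for all items): tally rows per sentiment label.
const counts = { Positive: 0, Neutral: 0, Negative: 0 };

for (const item of $input.all()) {
  const sentiment = item.json.Sentiment;
  if (sentiment in counts) counts[sentiment] += 1;
}

// Labels and data in the shape a doughnut chart expects.
return [{
  json: {
    labels: Object.keys(counts),
    data: Object.values(counts),
  },
}];
```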
by Paul
AI Database Assistant with Smart Queries & PostgreSQL Integration

Description:

🚀 Transform Your Database into an Intelligent AI Assistant

This workflow creates a smart database assistant that safely handles natural language queries without crashing your system. It features a dual-agent architecture with built-in query limits and PostgreSQL optimization, making it a good fit for commercial applications!

✅ Ideal for:

- SaaS developers building database search features 🔍
- Database administrators providing safe AI access 🛡️
- Business teams needing user-friendly data queries 📊
- Anyone wanting ChatGPT-like database interaction 🤖

🔧 How It Works

1️⃣ User asks a question: "Show me top 10 popular products"
2️⃣ Main AI Agent: interprets the request and enforces safety limits
3️⃣ SQL Sub-Agent: generates precise PostgreSQL queries
4️⃣ Database executes: returns formatted, limited results safely

A sketch of a guard that enforces the result limit follows at the end of this description.

⚡ Setup Instructions

1️⃣ Prepare Your Database
- Ensure PostgreSQL is accessible from n8n
- Note your table structure and column names
- Set up database connection credentials

2️⃣ Customize the Templates
- Replace [YOUR_TABLE_NAME] with your actual table name
- Update [YOUR_FIELDS] with your column names
- Modify examples to match your use case
- Important: Keep all LIMIT clauses intact!

3️⃣ Configure the Agents
- Copy the Main Agent system message to your primary AI node
- Copy the Sub-Agent system message to your SQL generator node
- Connect the sub-workflow between both agents

4️⃣ Test & Deploy
- Test with sample queries like "Show me 5 recent items"
- Verify query limits work (max 50 results)
- Deploy and monitor performance

🎯 Why Use This Workflow?

✔️ System Protection: built-in limits prevent crashes from large queries
✔️ Natural Language: users ask questions in plain English
✔️ Commercial Ready: generic templates work with any database
✔️ Dual-Agent Safety: smart interpretation + precise SQL generation
✔️ PostgreSQL Optimized: handles complex schemas and data types

🚨 Critical Features

- Query Limits: default 10, maximum 50 results (can be modified)
- Error Prevention: no unlimited data retrieval
- Smart Routing: natural language → safe SQL → formatted results
- Customizable: works with any PostgreSQL database schema

🔗 Start building your AI database assistant today: safe, smart, and scalable!
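A defensive guard can sit between the SQL sub-agent and the Postgres node. This is a sketch under the assumption that the generated SQL arrives in `$json.query`; it clamps an oversized LIMIT and injects a default one when it is missing, so no query can return unbounded rows:

```js
// n8n Code node: enforce the default (10) and maximum (50) result limits.
const MAX_LIMIT = 50;
const DEFAULT_LIMIT = 10;
let query = ($json.query ?? '').trim().replace(/;+\s*$/, '');

const limitMatch = query.match(/\blimit\s+(\d+)\b/i);
if (limitMatch) {
  // Clamp an existing LIMIT down to the maximum.
  if (parseInt(limitMatch[1], 10) > MAX_LIMIT) {
    query = query.replace(/\blimit\s+\d+\b/i, `LIMIT ${MAX_LIMIT}`);
  }
} else {
  // No LIMIT at all: append the default.
  query += ` LIMIT ${DEFAULT_LIMIT}`;
}

return [{ json: { query } }];
```

This keeps the "Keep all LIMIT clauses intact!" rule enforceable in code rather than relying on the prompt alone.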
by Yulia
This workflow shows how to use a self-hosted Large Language Model (LLM) with n8n's LangChain integration to extract personal information from user input. This is particularly useful for enterprise environments where data privacy is crucial, as it allows sensitive information to be processed locally.

📖 For a detailed explanation and more insights on using open-source LLMs with n8n, take a look at our comprehensive guide on open-source LLMs.

🔑 Key Features

Local LLM
- Connect Ollama to run the Mistral NeMo LLM locally
- Provide a foundation for compliant data processing, keeping sensitive information on-premises

Data extraction
- Convert unstructured text to a consistent JSON format
- Adjust the JSON schema to meet your specific data extraction needs (an illustrative schema follows at the end of this description)

Error handling
- Implement auto-fixing for LLM outputs
- Include an error output for further processing

⚙️ Setup and configuration

Prerequisites
- n8n AI Starter Kit installed

Configuration steps
1. Add the Basic LLM Chain node with system prompts.
2. Set up the Ollama Chat Model with optimized parameters.
3. Define the JSON schema in the Structured Output Parser node.

🔍 Further resources

- Run LLMs locally with n8n
- Video tutorial on using local AI with n8n

Apply the power of self-hosted LLMs in your n8n workflows while maintaining control over your data processing pipeline!
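For illustration, here is one possible extraction schema, expressed as a JS object. The fields (name, email, city) are assumptions, not the template's actual schema; replace them with whatever personal information your use case needs to extract, and paste the stringified JSON into the parser's schema field:

```js
// Illustrative JSON schema for the Structured Output Parser node.
const personSchema = {
  type: 'object',
  properties: {
    name:  { type: 'string', description: 'Full name of the person' },   // assumed field
    email: { type: 'string', description: 'Email address, if mentioned' }, // assumed field
    city:  { type: 'string', description: 'City of residence, if mentioned' }, // assumed field
  },
  required: ['name'],
};

// Print the schema in the format the parser node expects.
console.log(JSON.stringify(personSchema, null, 2));
```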