by Milorad Filipović
How it works
It's very important to come prepared to sales calls, which often means a lot of manual research about the people you're meeting. This workflow delivers the latest news about the businesses you are about to interact with each day:
- **Scans your calendar**: Each morning, it reviews your Google Calendar for any scheduled meetings or calls with companies.
- **Fetches the latest news**: For each identified company, it searches the web for the most recent and relevant news articles using newsapi.org (see the sketch below).
- **Delivers insights**: You receive personalized emails via Gmail, each dedicated to a company you're meeting with that day, containing a curated list of news headlines, brief descriptions, and direct links to the full articles.

Setup steps
The workflow requires you to have the following accounts set up in their respective nodes:
- Google Calendar
- Gmail

Besides those, the node called Setup exposes a few parameters that can be used to tweak the workflow.
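For illustration, here is a minimal sketch of the per-company news lookup against newsapi.org. The `NEWSAPI_KEY` environment variable and the five-article limit are assumptions for this sketch, not settings from the template:

```typescript
// Minimal sketch of the news lookup performed for each company found in
// the calendar. NEWSAPI_KEY and pageSize=5 are illustrative assumptions.
async function fetchCompanyNews(company: string) {
  const url =
    "https://newsapi.org/v2/everything" +
    `?q=${encodeURIComponent(company)}` +
    "&sortBy=publishedAt&pageSize=5" +
    `&apiKey=${process.env.NEWSAPI_KEY}`;
  const body = await (await fetch(url)).json();
  // Each article carries the headline, description and link used in the digest email.
  return body.articles.map((a: any) => ({
    headline: a.title,
    summary: a.description,
    link: a.url,
  }));
}
```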
by Niklas Hatje
Use case
When working with multiple teams, bugs must get in front of the right team as quickly as possible to be resolved. Normally this involves manually grooming new bugs as they arrive in your ticketing system (in our case Linear). We found this way too time-consuming. That's why we built this workflow.

What this workflow does
This workflow triggers every time a Linear issue is created or updated within a certain team. For us at n8n, we created one general team called Engineering where all bugs get added in the beginning. The workflow then checks if the issue meets the criteria to be auto-moved to a certain team. In our case, that means that the description is filled, that it has the bug label, and that it's in the Triage state. The workflow then classifies the bug using OpenAI's GPT-4 model (see the sketch below) before updating the team property of the Linear issue. If the AI fails to classify a team, the workflow sends an alert to Slack.

Setup
- Add your Linear and OpenAI credentials
- Change the team in the Linear Trigger to match your needs
- Customize your teams and their areas of responsibility in the Set me up node. Please use the format Teamname. Also, make sure that the team names match the names in Linear exactly.
- Change the Slack channel in the Set me up node to your Slack channel of choice.

How to adjust it to your needs
- Play around with the context that you're giving to OpenAI to make sure the model has enough knowledge about your teams and their areas of responsibility
- Adjust the handling of AI failures to your needs

How to enhance this workflow
At n8n we use this workflow in combination with some others. For example, we also run an automation that enables everyone to add new bugs easily with the right data via a /bug command in Slack (check out this template if that's interesting to you).

This workflow was built using n8n version 1.30.0
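As a rough sketch of what the classification step does (the template itself uses n8n's OpenAI node; the team names, descriptions, and prompt wording below are placeholders):

```typescript
// Hedged sketch of the classification step: ask GPT-4 to pick a team for
// an issue. Teams and their responsibility descriptions are illustrative.
const teams: Record<string, string> = {
  "Workflow Engine": "execution runtime, expressions, core nodes",
  "Editor UI": "canvas, node settings panel, frontend bugs",
};

async function classifyBug(title: string, description: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [
        {
          role: "system",
          content:
            "Assign the bug to exactly one team. Reply with the team name only, " +
            "or 'unknown' if unsure.\n" +
            Object.entries(teams).map(([t, d]) => `${t}: ${d}`).join("\n"),
        },
        { role: "user", content: `${title}\n\n${description}` },
      ],
    }),
  });
  const body = await res.json();
  // e.g. "Editor UI", or "unknown" (which would trigger the Slack alert)
  return body.choices[0].message.content.trim();
}
```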
by Nicolas Chourrout
This workflow automatically generates draft replies in Gmail. It's designed for anyone who manages a high volume of emails or often faces writer's block when crafting responses. Since it doesn't send the generated message directly, you're still in charge of editing and approving emails before they go out.

How It Works
- Email Trigger: activates when new emails reach the Gmail inbox
- Assessment: uses OpenAI gpt-4o and a JSON parser to determine if a response is necessary
- Reply Generation: crafts a reply with OpenAI GPT-4 Turbo
- Draft Integration: after converting the text to HTML, it places the draft into the Gmail thread as a reply to the first message (see the sketch below)

Set Up Overview (~10 minutes)
- OAuth Configuration (follow the n8n instructions here): set up Google OAuth in the Google Cloud console. Make sure to add the Gmail API with the modify scope. Add the Google OAuth credentials in n8n. Make sure to add the n8n redirect URI to the Google Cloud Console consent screen settings.
- OpenAI Configuration: add an OpenAI API key in the credentials
- Tweaking the prompt: edit the system prompt in the "Generate email reply" node to suit your needs

Detailed Walkthrough
Check out this blog post where I go into more detail on how I built this workflow. Reach out to me here if you need help building automations for your business.
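For reference, a minimal sketch of the draft-integration step via the Gmail REST API (the template uses n8n's Gmail node instead; the token and ID parameters here are placeholders):

```typescript
// Hedged sketch: place an AI-generated reply as a Gmail draft on the
// original thread. Requires an OAuth token with the gmail.modify scope.
async function createDraftReply(
  accessToken: string,
  threadId: string,
  originalMessageId: string, // RFC 822 Message-ID of the first message
  to: string,
  subject: string,
  htmlBody: string
) {
  const raw = [
    `To: ${to}`,
    `Subject: Re: ${subject}`,
    `In-Reply-To: ${originalMessageId}`,
    `References: ${originalMessageId}`,
    "Content-Type: text/html; charset=UTF-8",
    "",
    htmlBody,
  ].join("\r\n");

  // Gmail expects the raw RFC 2822 message base64url-encoded.
  const encoded = Buffer.from(raw).toString("base64url");

  await fetch("https://gmail.googleapis.com/gmail/v1/users/me/drafts", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ message: { threadId, raw: encoded } }),
  });
}
```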
by Mihai Farcas
How it works
- The workflow starts by sending a request to a website to retrieve its HTML content.
- It then parses the HTML, extracting the relevant information (see the sketch below).
- The extracted data is stored and converted into a CSV file.
- The CSV file is attached to an email and sent to your specified address.
- The data is simultaneously saved to both Google Sheets and Microsoft Excel for further analysis or use.

Set-up steps
- Change the website to scrape in the "Fetch website content" node
- Configure Microsoft Azure credentials with Microsoft Graph permissions (required for the Save to Microsoft Excel 365 node)
- Configure Google Cloud credentials with access to the Google Drive, Google Sheets and Gmail APIs (the latter is required for the Send CSV via e-mail node)
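A minimal sketch of the parse-and-convert step, assuming a cheerio-style extraction in a Code node. The `.product`, `.title`, and `.price` selectors are placeholders for whatever the target site actually uses:

```typescript
// Hedged sketch: extract fields from the fetched HTML and flatten them
// into CSV. Selectors below are illustrative assumptions.
import * as cheerio from "cheerio";

function htmlToCsv(html: string): string {
  const $ = cheerio.load(html);
  const rows: string[] = ["title,price"]; // CSV header row
  $(".product").each((_, el) => {
    const title = $(el).find(".title").text().trim();
    const price = $(el).find(".price").text().trim();
    // Quote fields so commas inside values don't break the CSV.
    rows.push(`"${title.replace(/"/g, '""')}","${price.replace(/"/g, '""')}"`);
  });
  return rows.join("\n");
}
```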
by Jimleuk
This n8n workflow demonstrates how to automate often time-consuming form-filling tasks in the early stages of the tendering process: the Request for Proposal document, or "RFP". It does this by using a company's knowledgebase to generate question-and-answer pairs with Large Language Models.

How it works
- A buyer's RFP is submitted to the workflow as a digital document that can be parsed.
- Our first AI agent scans and extracts all questions from the document into list form.
- The supplier sets up an OpenAI assistant beforehand, loaded with company brand, marketing and technical documents.
- The workflow loops through each of the buyer's questions and poses them to the OpenAI assistant (see the sketch below).
- The assistant's answers are captured until all questions are satisfied, and are then exported into a new document for review.
- A sales team member can then use this document to respond quickly to the RFP before their competitors.

Example Webhook Request
```
curl --location 'https://<n8n_webhook_url>' \
--form 'id="RFP001"' \
--form 'title="BlueChip Travel and StarBus Web Services"' \
--form 'reply_to="jim@example.com"' \
--form 'data=@"k9pnbALxX/RFP Questionnaire.pdf"'
```

Requirements
- An OpenAI account to use AI services.

Customising the workflow
OpenAI assistants are only one approach to hosting a company knowledgebase for AI to use. Exploring different solutions, such as building your own RAG-powered database, can sometimes yield better results in terms of cost and control over how the data is managed.
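For orientation, a compact sketch of the per-question loop against the OpenAI Assistants API (the template does this with n8n's OpenAI nodes; `ASSISTANT_ID` and the 1-second poll interval are assumptions):

```typescript
// Hedged sketch of the Q&A loop using the OpenAI Assistants API (v2).
const headers = {
  Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  "Content-Type": "application/json",
  "OpenAI-Beta": "assistants=v2",
};
const base = "https://api.openai.com/v1";

async function answerQuestion(question: string): Promise<string> {
  // One thread per question keeps answers independent of each other.
  const thread = await (
    await fetch(`${base}/threads`, { method: "POST", headers, body: "{}" })
  ).json();
  await fetch(`${base}/threads/${thread.id}/messages`, {
    method: "POST",
    headers,
    body: JSON.stringify({ role: "user", content: question }),
  });
  let run = await (
    await fetch(`${base}/threads/${thread.id}/runs`, {
      method: "POST",
      headers,
      body: JSON.stringify({ assistant_id: process.env.ASSISTANT_ID }),
    })
  ).json();
  // Poll until the run finishes, then read the assistant's reply
  // (messages are listed newest-first by default).
  while (run.status === "queued" || run.status === "in_progress") {
    await new Promise((r) => setTimeout(r, 1000));
    run = await (
      await fetch(`${base}/threads/${thread.id}/runs/${run.id}`, { headers })
    ).json();
  }
  const msgs = await (
    await fetch(`${base}/threads/${thread.id}/messages`, { headers })
  ).json();
  return msgs.data[0].content[0].text.value;
}
```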
by Jimleuk
This n8n workflow demonstrates a simple approach to improve chat UX by staggering an AI Agent's reply for users who send a sequence of partial messages in short bursts.

How it works
- A Twilio webhook receives the user's messages, which are recorded in a message stack powered by Redis.
- The execution is immediately paused for 5 seconds, and then another check is done against the message stack for the latest message. This check lets us know whether the user is still sending messages or is waiting for a reply.
- The execution is aborted if the latest message on the stack differs from the incoming message, and continues if they are the same. In the latter case, the agent receives the buffered messages up to that point and is able to respond to them in a single reply (see the sketch below).

Requirements
- A Twilio account and an SMS-enabled phone number to receive messages.
- A Redis instance for the message stack.
- An OpenAI account for the language model.

Customising the workflow
This workflow should work for other common messaging platforms such as WhatsApp and Telegram. Is 5 seconds too long or too short? Adjust the wait threshold to suit your customers.
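A minimal sketch of the buffering logic, assuming node-redis; the `msgstack:` key prefix is an illustrative choice and the 5-second wait mirrors the description:

```typescript
// Hedged sketch of the message-stack debounce described above.
import { createClient } from "redis";

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

async function handleIncoming(userId: string, message: string): Promise<string | null> {
  const key = `msgstack:${userId}`;
  await redis.rPush(key, message);

  // Give the user 5 seconds to send a follow-up (the workflow's Wait node).
  await new Promise((resolve) => setTimeout(resolve, 5000));

  // If a newer message arrived while we waited, abort this execution;
  // the execution handling the newest message will answer the whole burst.
  const [latest] = await redis.lRange(key, -1, -1);
  if (latest !== message) return null;

  // We hold the latest message: hand the buffered burst to the AI agent.
  const buffered = await redis.lRange(key, 0, -1);
  await redis.del(key);
  return buffered.join("\n");
}
```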
by Jimleuk
This n8n workflow demonstrates how to automate image captioning tasks using Gemini 1.5 Pro - a multimodal LLM which can accept and analyse images. This is a really simple example of how easy it is to build and leverage powerful AI models in your repetitive tasks.

How it works
- For this demo, we'll import a public image from a popular stock photography website, Pexels.com, into our workflow using the HTTP Request node.
- With multimodal LLMs, there is little to preprocess other than ensuring the image dimensions fit within the LLM's accepted limits. Though not essential, we'll resize the image using the Edit Image node to achieve faster processing.
- The image is used as an input to the Basic LLM node by defining a "user message" entry with the binary (data) type.
- The LLM node has the Gemini 1.5 Pro language model attached, and we'll prompt it to generate a caption title and text appropriate for the image it sees.
- Once generated, the caption text is positioned over the original image to complete the task. We can calculate the positioning relative to the number of characters produced using the Code node (see the sketch below).

An example of the combined image and caption can be found here: https://res.cloudinary.com/daglih2g8/image/upload/f_auto,q_auto/v1/n8n-workflows/l5xbb4ze4wyxwwefqmnc

Requirements
- Google Gemini API key.
- Access to Google Drive.

Customising the workflow
- Not using Google Gemini? n8n's Basic LLM node supports the standard syntax for image content for models that support it - try using GPT-4o, Claude or LLaVA (via Ollama).
- Google Drive is only used for demonstration purposes. Feel free to swap this out for other triggers, such as webhooks, to fit your use case.
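As a rough sketch of the positioning calculation in the Code node (the font metrics and margins below are illustrative assumptions, not values from the template):

```typescript
// Hedged sketch: estimate where the caption block should sit based on
// how many characters the LLM generated.
function captionLayout(caption: string, imgWidth: number, imgHeight: number) {
  const fontSize = 24;
  const avgCharWidth = fontSize * 0.55;         // rough width per character
  const charsPerLine = Math.floor((imgWidth - 40) / avgCharWidth);
  const lines = Math.ceil(caption.length / charsPerLine);
  const blockHeight = lines * (fontSize * 1.4); // line height ≈ 1.4 × font size
  return {
    x: 20,
    y: imgHeight - blockHeight - 20, // anchor the block near the bottom edge
    lineLength: charsPerLine,        // where to wrap the caption text
  };
}
```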
by Yaron Been
Transform YouTube comments into actionable insights with automated AI analysis and professional email reports. This intelligent workflow monitors your Google Sheets for YouTube video IDs, fetches comments using the YouTube API, performs comprehensive AI sentiment analysis, and delivers formatted email reports with viewer insights - helping content creators understand their audience and improve engagement.

🚀 What It Does
**Smart Video Monitoring**: Watches Google Sheets for new YouTube video IDs marked as "Pending" and triggers automated analysis
**Complete Comment Collection**: Fetches up to 100 top comments per video using the YouTube API with relevance-based ordering (see the sketch at the end of this description)
**AI-Powered Analysis**: Uses GPT-4 to analyze comments for sentiment, themes, questions, feedback, and actionable insights
**Professional Email Reports**: Generates detailed HTML reports with statistics, sentiment breakdown, and improvement recommendations
**Automated Status Tracking**: Updates the spreadsheet status to prevent duplicate processing and maintain an organized workflow

🎯 Key Benefits
✅ Deep Audience Insights: Understand what viewers really think about your content
✅ Save Hours of Manual Work: Automated comment analysis vs reading hundreds of comments
✅ Improve Content Strategy: Get actionable feedback for better video performance
✅ Track Sentiment Trends: Monitor positive/negative feedback patterns
✅ Professional Reporting: Receive formatted analysis reports via email
✅ Scalable Analysis: Process multiple videos automatically

🏢 Perfect For
Content Creators & YouTubers
- Individual creators tracking audience engagement
- Educational channels analyzing learning feedback
- Entertainment creators understanding viewer preferences
- Business channels monitoring brand sentiment

Marketing & Business Applications
- **Brand Monitoring**: Track sentiment on branded content and partnerships
- **Audience Research**: Understand viewer demographics and preferences
- **Content Optimization**: Identify what resonates with your audience
- **Competitor Analysis**: Analyze comments on competitor videos (where allowed)

⚙️ What's Included
- Complete Analytics Workflow: Ready-to-deploy YouTube comment analysis system
- Google Sheets Integration: Simple spreadsheet-based video management
- YouTube API Integration: Automated comment fetching with proper authentication
- AI Analysis Engine: GPT-4 powered sentiment and insight generation
- Email Reporting System: Professional HTML-formatted reports
- Status Management: Automatic processing tracking and duplicate prevention

🔧 Setup Requirements
- **n8n Platform**: Cloud or self-hosted instance
- **YouTube API Credentials**: Google Cloud Console API access
- **OpenAI API**: GPT-4 access for comment analysis
- **Google Sheets**: Video ID management and status tracking
- **Gmail Account**: For receiving analysis reports

📊 Required Google Sheets Structure

| ID | Video Title  | YouTube Video ID | Status    |
|----|--------------|------------------|-----------|
| 1  | My Tutorial  | dQw4w9WgXcQ      | Pending   |
| 2  | Product Demo | abc123def456     | Mail Sent |
| 3  | Weekly Vlog  | xyz789uvw012     | Draft     |

Status Options: Draft → Pending → Mail Sent

📧 Sample Analysis Report

📺 YouTube Comments Analysis Report
Video: "How to Build Your First Website"

📊 Quick Statistics:
• Total Comments Analyzed: 87
• Average Likes per Comment: 3.2
• Total Replies: 156
• Sentiment Summary: Positive: 65%, Negative: 10%, Neutral: 25%

❓ Common Questions:
• "What hosting service do you recommend?"
• "Can I do this without coding experience?"
• "How much does domain registration cost?"

💡 Key Feedback Points:
• Tutorial pace is perfect for beginners
• More examples of finished websites requested
• Viewers want follow-up video on advanced features

🎯 Actionable Insights:
• Create hosting comparison video
• Add timestamps for different skill levels
• Consider beginner-friendly series expansion

🎨 Customization Options
- Analysis Depth: Adjust AI prompts for different analysis focuses (engagement, education, entertainment)
- Comment Limits: Modify the maximum number of comments processed (default: 100; AI analysis: 50)
- Report Recipients: Send reports to multiple team members or clients
- Custom Metrics: Add specific analysis criteria for your content niche
- Multi-Channel: Process videos from multiple YouTube channels
- Scheduling: Set up regular analysis of your latest videos

🏷️ Tags & Categories
#youtube-analytics #comment-analysis #content-creator-tools #ai-sentiment-analysis #video-insights #audience-research #youtube-api #content-optimization #social-media-analytics #creator-economy #video-marketing #engagement-analysis #content-strategy #ai-reporting #youtube-automation

💡 Use Case Examples
- Educational Channel: Analyze tutorial comments to identify confusing concepts and improve teaching methods
- Product Reviews: Monitor sentiment on review videos to understand customer satisfaction trends
- Entertainment Creator: Track audience reactions to different content formats and optimize future videos
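As a reference for the comment-collection step, a minimal sketch against the YouTube Data API v3 (the template uses n8n's credentialed HTTP/YouTube nodes; the field selection here is an illustrative subset):

```typescript
// Hedged sketch: pull up to 100 top-level comments for a video, ordered
// by relevance, as described in "Complete Comment Collection".
async function fetchComments(videoId: string, apiKey: string) {
  const url =
    "https://www.googleapis.com/youtube/v3/commentThreads" +
    `?part=snippet&videoId=${videoId}` +
    "&maxResults=100&order=relevance" +
    `&key=${apiKey}`;
  const body = await (await fetch(url)).json();
  return body.items.map((item: any) => {
    const c = item.snippet.topLevelComment.snippet;
    return {
      text: c.textDisplay,                    // comment text for AI analysis
      likes: c.likeCount,                     // feeds the avg-likes statistic
      replies: item.snippet.totalReplyCount,  // feeds the total-replies statistic
    };
  });
}
```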
by Jimleuk
This n8n workflow demonstrates an approach to parsing bank statement PDFs with multimodal LLMs as an alternative to traditional OCR. This allows for much more accurate data extraction from the document, especially when it comes to tables and complex layouts.

Multimodal parsing is better than traditional OCR because:
- It reduces complexity and overhead by avoiding the need to preprocess the document into a text format such as markdown before passing it to the LLM.
- It handles non-standard PDF formats which may produce garbled output via traditional OCR text conversion.
- It's orders of magnitude cheaper than premium OCR models that still require post-processing cleanup and formatting. LLMs can format to any schema or language you desire!

How it works
You can use the example bank statement created specifically for this workflow here: https://drive.google.com/file/d/1wS9U7MQDthj57CvEcqG_Llkr-ek6RqGA/view?usp=sharing
- A PDF bank statement is imported via Google Drive. For this demo, I've created a mock bank statement which includes complex table layouts of 5 columns. Typically, OCR will be unable to align the columns correctly and will mistake some deposits for withdrawals.
- Because multimodal LLMs do not accept PDFs directly, we'll have to convert the PDF to a series of images. We can achieve this by using a tool such as Stirling PDF. Stirling PDF is self-hostable, which is handy for sensitive data such as bank statements.
- Stirling PDF returns our PDF as a series of JPGs (one for each page) in a zipped file. We can use n8n's Decompress node to extract the images and ensure they are ordered by using the Sort node (see the sketch below).
- Next, we'll resize each page using the Edit Image node to strike the right balance between resolution limits and processing speed.
- Each resized page image is then passed into the Basic LLM node, which uses our multimodal LLM of choice - Gemini 1.5 Pro. In the LLM node's options, we'll add a "user message" of type binary (data), which is how we add our image data as an input.
- Our prompt instructs the multimodal LLM to transcribe each page to markdown. Note that you do not need to do this - you can just ask for the data points to extract directly! Our goal for this template is to demonstrate the LLM's ability to accurately read the page.
- Finally, with our markdown version of all pages, we can pass this to another LLM node to extract the required data, such as deposit line items.

Requirements
- Google Gemini API for the multimodal LLM.
- Google Drive access for document storage.
- Stirling PDF instance for PDF-to-image conversion.

Customising the workflow
- At the time of writing, Gemini 1.5 Pro is the most accurate at text document parsing with a relatively low cost. If you are not using Google Gemini, however, you can switch to other multimodal LLMs such as OpenAI GPT or Anthropic Claude.
- If you don't need the markdown, simply asking for what to extract directly in the LLM's prompt is also acceptable and saves a few extra steps.
- Not parsing any bank statements any time soon? This template also works for invoices, inventory lists, contracts, legal documents, etc.
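A small sketch of the ordering step after decompressing Stirling PDF's zip, assuming page numbers are embedded in the filenames (the exact filename pattern depends on your Stirling PDF version):

```typescript
// Hedged sketch: sort extracted page images by the trailing page number
// in their filenames, e.g. "statement_3.jpg".
interface PageImage {
  fileName: string;
  data: Buffer;
}

function sortPages(pages: PageImage[]): PageImage[] {
  const pageNo = (name: string): number => {
    const match = name.match(/(\d+)(?=\.jpe?g$)/i); // number just before .jpg
    return match ? parseInt(match[1], 10) : Number.MAX_SAFE_INTEGER;
  };
  return [...pages].sort((a, b) => pageNo(a.fileName) - pageNo(b.fileName));
}
```

Keeping pages in order matters here because the per-page markdown transcripts are concatenated before the final extraction pass.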
by Agent Studio
Overview
This workflow aims to provide data visualization capabilities to a native SQL Agent. Together, they can help foster data analysis and data visualization within a team. It uses the native SQL Agent, which works well on its own, and adds visualization capabilities thanks to OpenAI's Structured Output and Quickchart.io.

How it works
- Information Extraction: The Information Extractor identifies and extracts the user's question. If the question includes a visualization aspect, the SQL Agent alone may not respond accurately.
- SQL Querying: It leverages a regular SQL Agent: it connects to a database, queries it, and translates the response into a human-readable format.
- Chart Decision: The Text Classifier determines whether the user would benefit from a chart to support the SQL Agent's response.
- Chart Generation: If a chart is needed, the sub-workflow dynamically generates a chart and appends it to the SQL Agent's response. If not, the SQL Agent's response is output as is.
- Calling OpenAI for Chart Definition: The sub-workflow calls OpenAI via the HTTP Request node to retrieve a chart definition.
- Building and Returning the Chart: In the "Set Response" node, the chart definition is appended to a Quickchart.io URL, generating the final chart image (see the sketch below). The AI Agent returns the response along with the chart.

How to use it
- Use an existing database or create a new one. For example, I've used this Kaggle dataset and uploaded it to a Supabase DB.
- Add the PostgreSQL or MySQL credentials. Alternatively, you can use SQLite binary files (check this template).
- Activate the workflow.
- Start chatting with the AI SQL Agent. If the Text Classifier determines a chart would be useful, it will generate one in addition to the SQL Agent's response.

Notes
The full Quickchart.io specification has not been fully integrated, so there may be some glitches (e.g., radar graphs may not display properly due to size limitations).
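A minimal sketch of the "Set Response" step: the Chart.js-style definition returned by OpenAI is URL-encoded into a Quickchart.io image link (the example chart config is illustrative):

```typescript
// Hedged sketch: serialize a chart definition into a Quickchart.io URL.
function quickchartUrl(chartDefinition: object): string {
  const encoded = encodeURIComponent(JSON.stringify(chartDefinition));
  return `https://quickchart.io/chart?c=${encoded}`;
}

// Example usage with a minimal bar chart:
const url = quickchartUrl({
  type: "bar",
  data: {
    labels: ["Q1", "Q2", "Q3"],
    datasets: [{ label: "Revenue", data: [120, 90, 150] }],
  },
});
```

Since the whole definition lives in the query string, very large configs can hit URL length limits - one likely cause of the radar-graph glitches mentioned in the notes.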
by Jimleuk
This n8n template builds upon a simple appointment request form design which uses AI to qualify if the incoming enquiry is suitable and/or time-worthy of an appointment. This demonstrates a lighter approach to using AI in your templates while handling a technically difficult problem - contextual understanding! This example can be used in a variety of contexts where figuring out what is and isn't relevant can save a lot of time for your organisation.

How it works
- We start with a form trigger which asks for the purpose of the appointment. Instantly, we can qualify this by using a Text Classifier node, which uses AI's contextual understanding to ensure the appointment is worthwhile. If not, an alternative is suggested instead.
- Multi-page forms are then used to set the terms of the appointment and ask the user for a desired date and time.
- An acknowledgement is sent to the user while an approval-by-email process is triggered in the background.
- In a subworkflow, we use Gmail with the "wait for approval" operation to send an approval form to the admin user, who can either confirm or decline the appointment request.
- When approved, a Google Calendar event is created (see the sketch below). When declined, the user is notified via email that the appointment request was declined.

How to use
- Modify the enquiry classifier to determine which contexts are relevant to you.
- Configure the "wait for approval" node to send to an email address which is accessible to all appropriate team members.

Requirements
- OpenAI for the LLM
- Gmail for email
- Google Calendar for appointments

Customising this workflow
- Not using Google Mail or Calendar? Feel free to swap these with other services.
- The "wait for approval" step is optional. Remove it if you wish to handle appointment request resolution in another way.
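For reference, a minimal sketch of the event creation that follows approval, via the Google Calendar REST API (the template uses n8n's Google Calendar node; the token and timestamps here are placeholders):

```typescript
// Hedged sketch: create the calendar event once the admin confirms.
// accessToken must carry a Google Calendar OAuth scope.
async function createAppointment(
  accessToken: string,
  summary: string,
  startIso: string, // e.g. "2024-05-01T10:00:00Z"
  endIso: string
) {
  await fetch("https://www.googleapis.com/calendar/v3/calendars/primary/events", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      summary,
      start: { dateTime: startIso },
      end: { dateTime: endIso },
    }),
  });
}
```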
by ibrhdotme
This is a simple workflow that grabs HackerNews front-page headlines from today's date across every year since 2007 and uses a little AI magic (Google Gemini) to sort 'em into themes, then sends a neat Markdown summary on Telegram.

How it works
- Runs daily and grabs the Hacker News front page for this day across every year since 2007 (see the sketch below).
- Pulls headlines & dates.
- Uses Google Gemini to sort headlines into topics & spot trends.
- Sends a Markdown summary to Telegram.

Set up steps
- Clone the workflow.
- Add your Google Gemini API key.
- Add your Telegram bot token and chat ID.

**Built on Day-01 as part of the #100DaysOfAgenticAi.** Fork it, tweak it, have fun!
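A tiny sketch of the URL fan-out, assuming HN's front-page archive at `/front?day=` (each URL is then fetched and its headlines extracted):

```typescript
// Hedged sketch: build one HN front-page archive URL per past year for
// today's month/day, from 2007 up to (but not including) the current year.
function frontPageUrls(startYear = 2007): string[] {
  const now = new Date();
  const mm = String(now.getMonth() + 1).padStart(2, "0");
  const dd = String(now.getDate()).padStart(2, "0");
  const urls: string[] = [];
  for (let year = startYear; year < now.getFullYear(); year++) {
    urls.push(`https://news.ycombinator.com/front?day=${year}-${mm}-${dd}`);
  }
  return urls;
}
```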