by Adrian Bent
This workflow takes two inputs: a YouTube video URL (required) and a description of what information to extract from the video. If the description ("what you want") field is left empty, the default prompt generates a detailed summary and description of the video's contents; you can ask for something more specific using this field. Don't forget to make the workflow Active and use the production URL from the form node.

**Benefits**
- Instant Summary Generation - Convert hours of watching YouTube videos into familiar, structured paragraphs and sentences in less than a minute
- Live Integration - Generate a summary or extract information from a YouTube video whenever, wherever
- Virtually Complete Automation - All you need to do is add the video URL and describe what you want to know from the video
- Presentation - You can ask for a specific structure or tone to better help you understand or study the contents of the video

**How It Works**
- Smart Form Interface: a simple n8n form captures the video URL and a description of what's to be extracted, designed for rapid and repeated completion anywhere and anytime
- Description Check: JavaScript determines whether the description was filled in or left empty. If it was left empty, the default prompt is, "Please be as descriptive as possible about the contents being spoken of in this video after giving a detailed summary." If it is filled, the provided input describes what information to extract from the video
- HTTP Request: we use the Gemini API, specifically the video-understanding endpoint, making a POST request that passes the video URL and the description of what information to extract

**Setup Instructions**
HTTP Request Setup:
- Sign up for a Google Cloud account, join the Developer Program and get your Gemini API key
- Get the curl snippet for the Gemini Video Understanding API
- The video understanding relies on the inputs from the form, Code and HTTP Request nodes, so correct mapping is essential for the workflow to function correctly
- Feel free to reach out for additional help or clarification at my Gmail: terflix45@gmail.com, and I'll get back to you as soon as I can

**Setup Steps**
Code Node Setup: the Code node acts as a filter to ensure a description prompt is always passed on. Use the JavaScript below for that effect:

```javascript
// Ensure a description prompt is always passed on to the HTTP Request node.
// Note: the property name must match your form field's label exactly.
for (const item of $input.all()) {
  if ((item.json['What do you want?'] ?? "").trim() === "") {
    item.json['What do you want?'] =
      "Please be as descriptive as possible about the contents being spoken of in this video after giving a detailed summary";
  }
}
return $input.all();
```

HTTP Request:
- To use Gemini Video Understanding, you'll need your Gemini API key
- Go to https://ai.google.dev/gemini-api/docs/video-understanding#youtube. This link takes you directly to the snippet; select the REST programming language, copy the curl command, then paste it into the HTTP Request node
- Replace "Please summarize the video in 3 sentences." with the Code node's output, which is either the default description or the one entered by the user (second output field variable)
- Replace "https://www.youtube.com/watch?v=9hE5-98ZeCg" with the n8n form node's first output field, which should be the YouTube video URL variable
- Replace $GEMINI_API_KEY with your API key
- A hedged sketch of the resulting request appears at the end of this entry

Redirect: use an n8n form node with page type "Final Ending" to redirect the user to the initial n8n form for another analysis, or to a preferred destination
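For orientation, here is a hedged sketch of what the HTTP Request node ends up sending once the placeholders are replaced, written as a JavaScript request for readability. The model name and field names follow the Gemini video-understanding REST docs at the time of writing; treat the curl snippet you copy from the docs as the source of truth if they differ.

```javascript
// Hedged sketch of the Gemini video-understanding request after placeholder replacement.
// Defer to the curl snippet copied from the Gemini docs if the model or fields differ.
const GEMINI_API_KEY = "your-api-key-here";
const url = `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${GEMINI_API_KEY}`;

const body = {
  contents: [{
    parts: [
      // Code node output: either the default prompt or the user's description
      { text: "Please be as descriptive as possible about the contents being spoken of in this video after giving a detailed summary" },
      // Form node output: the YouTube video URL
      { file_data: { file_uri: "https://www.youtube.com/watch?v=9hE5-98ZeCg" } },
    ],
  }],
};

const response = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(body),
});
console.log(await response.json());
```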
by M Sayed
Stop guessing currency rates! Get a quick and clean exchange rate summary sent right to your phone. 📲 This workflow automatically checks the latest rates and builds a simple report for you. What it does: 🤑 Fetches the very latest exchange rates from an API. 🌍 Shows you what major currencies (like USD & EUR) are worth in your chosen local currency. ✍️ Creates a simple, easy-to-read report. 🚀 Delivers it straight to your Telegram! Setup is easy: All you need to do is set your base currency (e.g., 'EGP') and your Telegram Chat ID. Done! ✅
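The template doesn't name its rate API, so here is a minimal sketch of the report-building step as an n8n Code node, assuming a generic response with a `rates` object keyed by currency code (the shape used by several free exchange-rate APIs); adjust the field name and currencies to your setup before wiring the output text into the Telegram node.

```javascript
// Minimal sketch of building the exchange-rate report (assumed response shape:
// a `rates` object keyed by currency code, quoted against your base currency).
const base = 'EGP';                // your base currency
const watch = ['USD', 'EUR'];      // majors to report on
const rates = $json.rates ?? {};   // field name depends on the rate API you use

const lines = watch.map(code => {
  const rate = rates[code];
  // rate = value of 1 unit of `base` in `code`; invert for the usual reading
  return rate ? `1 ${code} = ${(1 / rate).toFixed(2)} ${base}` : `${code}: n/a`;
});

return [{
  json: { text: `💱 Exchange rates (${new Date().toISOString().slice(0, 10)})\n` + lines.join('\n') },
}];
```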
by Anurag
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Description**
This workflow automates document processing and structured table extraction using the Nanonets API. You can submit a PDF file via an n8n form trigger or webhook—the workflow then forwards the document to Nanonets, waits for asynchronous parsing to finish, retrieves the results (including header fields and line items/tables), and returns the output as an Excel file. Ideal for automating invoice, receipt, or order data extraction with downstream business use.

**How It Works**
- A document is uploaded (via n8n form or webhook).
- The PDF is sent to the Nanonets Workflow API for parsing.
- The workflow waits until processing is complete.
- Parsed results are fetched.
- Both top-level fields and any table rows/line items are extracted and restructured (see the sketch at the end of this entry).
- Data is exported to Excel format and delivered to the requester.

**Setup Steps**
- Nanonets Account: Register for a Nanonets account and set up a workflow for your specific document type (e.g., invoice, receipt).
- Credentials in n8n: Add HTTP Basic Auth credentials in n8n for the Nanonets API (never store credentials directly in node parameters).
- Webhook/Form Configuration: Option 1: Configure and enable the included n8n Form Trigger node for document uploads. Option 2: Use the included Webhook node to accept external POSTs with a PDF file.
- Adjust Workflow: Update any HTTP nodes to use your credential profile. Insert your Nanonets Workflow ID in all relevant nodes.
- Test the Workflow: Enable the workflow and try with a sample document.

**Features**
- Accepts documents via n8n Form Trigger or direct webhook POST.
- Securely sends files to Nanonets for document parsing (credentials stored in the n8n credentials manager).
- Automatically waits for async processing, checking Nanonets until results are ready.
- Extracts both header data and all table/line items into a tabular format.
- Exports results as an Excel file download.
- Modular nodes allow easy customization or extension.

**Prerequisites**
- **Nanonets account** with a workflow configured for your document type.
- **n8n** instance with HTTP Request, Webhook/Form, Code, and Excel/Spreadsheet nodes enabled.
- **Valid HTTP Basic Auth credentials** saved in n8n for API access.

**Example Use Cases**

| Scenario | Benefit |
|----------------------|---------------------------------------------------|
| Invoice Processing | Automated extraction of line items and totals |
| Receipt Digitization | Parse amounts and charges for expense reports |
| Purchase Orders | Convert scanned POs into structured Excel sheets |

**Notes**
- You must set up credentials in the n8n credentials manager—do not store API keys directly in nodes.
- All configuration and endpoints are clearly explained with inline sticky notes in the workflow editor.
- Easily adaptable for other document types or similar APIs—just modify endpoints and result mapping.
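For illustration, here is a hedged sketch of the restructuring step as an n8n Code node. The exact Nanonets response shape depends on your workflow type; this assumes a `result` array whose entries carry `prediction` items with `label`, `ocr_text`, and, for tables, a `cells` array — verify against your own parsed output before reusing it.

```javascript
// Hedged sketch: flatten Nanonets parsed results (header fields + table line items)
// into one output item per row, ready for the Excel/Spreadsheet node.
// The response paths below are assumptions - adjust to your Nanonets workflow's output.
const rows = [];

for (const item of $input.all()) {
  const predictions = item.json.result?.[0]?.prediction ?? [];

  // Collect header fields (e.g. invoice number, total) into a single object.
  const header = {};
  for (const p of predictions) {
    if (p.type !== 'table') header[p.label] = p.ocr_text;
  }

  // Expand each table row into its own item, merged with the header fields.
  const itemRows = [];
  for (const p of predictions) {
    if (p.type !== 'table') continue;
    const byRow = {};
    for (const cell of p.cells ?? []) {
      byRow[cell.row] = { ...(byRow[cell.row] ?? {}), [cell.label]: cell.text };
    }
    for (const r of Object.values(byRow)) itemRows.push({ json: { ...header, ...r } });
  }

  if (itemRows.length === 0) itemRows.push({ json: header }); // document with no line items
  rows.push(...itemRows);
}

return rows;
```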
by Yaron Been
Automated monitoring system that sends instant alerts when target companies make technology changes, delivered directly to your inbox or Slack.

🚀 What It Does
- Monitors technology stack changes
- Sends real-time email alerts
- Posts updates to Slack
- Tracks historical changes
- Filters by technology type

🎯 Perfect For
- Sales teams
- IT departments
- Competitive intelligence
- Technology vendors
- Market researchers

⚙️ Key Benefits
✅ Instant technology change alerts
✅ Multiple notification channels
✅ Historical tracking
✅ Customizable filters
✅ Team collaboration

🔧 What You Need
- BuiltWith API access
- Email service (SMTP/SendGrid)
- Slack workspace (optional)
- n8n instance

📊 Alerts Include
- Company name
- Technology changes
- Timestamp
- Impact assessment
- Direct links

🛠️ Setup & Support
Quick Setup: get alerts in 15 minutes with our step-by-step guide
📺 Watch Tutorial | 💼 Get Expert Support | 📧 Direct Help

Stay informed about technology changes that matter to your business with automated monitoring alerts and notifications.
by n8n Team
This workflow syncs Shopify customers to your HubSpot account as contacts. Whenever somebody makes a purchase on Shopify, it automatically adds them as a new contact to your HubSpot account if the customer doesn't exist yet. This workflow also creates or updates deals from new paid orders on Shopify, adding the amount and close date of the deal.

Prerequisites
- Shopify account and Shopify credentials
- HubSpot account and HubSpot credentials

How it works
- The Shopify Trigger node starts the workflow whenever an order is updated.
- The HubSpot node creates or updates the contact who placed the order.
- A Set node keeps and passes on only the user ID.
- The Merge node combines the data of both inputs, the order and the customer.
- A HubSpot node looks up whether the deal for the order already exists.
- An If node splits the workflow conditionally, based on the data received.
- If the order is new, a new deal is created in the HubSpot node.
by Dhruv Dalsaniya
Description: This n8n workflow automates a Discord bot to fetch messages from a specified channel and send AI-generated responses in threads. It ensures smooth message processing and interaction, making it ideal for managing community discussions, customer support, or AI-based engagement. The workflow leverages Redis for memory persistence, ensuring that conversation history is maintained even if the workflow restarts, providing a seamless user experience.

How It Works
- The bot listens for new messages in a specified Discord channel.
- It sends the messages to an AI model for response generation.
- The AI-generated reply is posted as a thread under the original message.
- The bot runs on an Ubuntu server and is managed using PM2 for uptime stability.
- The Discord bot (Python script) acts as the bridge, capturing messages from Discord and sending them to the n8n webhook. The n8n workflow then processes these messages, interacts with the AI model, and sends the AI's response back to Discord via the bot.

Prerequisites to Host the Bot
- Sign up on Pella, a managed hosting service for Discord bots (easy setup).
- A Redis instance for memory persistence. Redis is an in-memory data structure store, used here to store and retrieve conversation history so the AI can maintain context across multiple interactions. This is crucial for coherent and continuous conversations.

Set Up Steps

1️⃣ Create a Discord Bot
- Go to the Discord Developer Portal.
- Click "New Application", enter a name, and create it.
- Navigate to Bot > Reset Token, then copy the Bot Token.
- Enable Privileged Gateway Intents (Presence, Server Members, Message Content).
- Under OAuth2 > URL Generator, select the bot scope and the required permissions.
- Copy the generated URL, open it in a browser, select your server, and click Authorize.
2️⃣ Deploy the Bot on Pella
Create a new folder discord-bot and navigate into it.

Create and configure a .env file to store your bot token and the n8n webhook URL (you can copy the webhook URL from the n8n workflow):

```
TOKEN=your-bot-token-here
WEBHOOK_URL=https://your-domain.tld/webhook/getmessage
```

Create a file main.py, copy the bot script below into it, and save it:

```python
import discord
import requests
import json
import os
from dotenv import load_dotenv

# Load environment variables from the .env file
load_dotenv()
TOKEN = os.getenv("TOKEN")
WEBHOOK_URL = os.getenv("WEBHOOK_URL")

# Bot configuration
LISTEN_CHANNELS = ["YOUR_CHANNEL_ID_1", "YOUR_CHANNEL_ID_2"]  # Replace with your target channel IDs

# Intents setup
intents = discord.Intents.default()
intents.messages = True          # Enable message events
intents.guilds = True
intents.message_content = True   # Required to read message content

client = discord.Client(intents=intents)


@client.event
async def on_ready():
    print(f'Logged in as {client.user}')


@client.event
async def on_message(message):
    if message.author == client.user:
        return  # Ignore the bot's own messages

    if str(message.channel.id) in LISTEN_CHANNELS:
        try:
            fetched_message = await message.channel.fetch_message(message.id)  # Ensure correct fetching
            payload = {
                "channel_id": str(fetched_message.channel.id),   # Ensure it's a string
                "chat_message": fetched_message.content,
                "timestamp": str(fetched_message.created_at),    # Ensure proper formatting
                "message_id": str(fetched_message.id),           # Ensure the ID is a string
                "user_id": str(fetched_message.author.id)        # Ensure the user ID is also a string
            }
            headers = {'Content-Type': 'application/json'}
            response = requests.post(WEBHOOK_URL, data=json.dumps(payload), headers=headers)
            if response.status_code == 200:
                print(f"Message sent successfully: {payload}")
            else:
                print(f"Failed to send message: {response.status_code}, Response: {response.text}")
        except Exception as e:
            print(f"Error fetching message: {e}")


client.run(TOKEN)
```

Create requirements.txt with the following contents (requests is added here because the script imports it):

```
discord
requests
python-dotenv
```

3️⃣ Follow the video to set up the bot so it runs 24/7
Tutorial - https://www.youtube.com/watch?v=rNnK3XlUtYU
Note: the Free Plan expires after 24 hours, so please opt for a Paid Plan on Pella to keep your bot running.

4️⃣ n8n Workflow Configuration
The n8n workflow consists of the following nodes:
- **Get Discord Messages (Webhook):** This node acts as the entry point for messages from the Discord bot. It receives the channel_id, chat_message, timestamp, message_id, and user_id from Discord when a new message is posted in a configured channel. Its webhook path is /getmessage and it expects a POST request.
- **Chat Agent (Langchain Agent):** This node processes the incoming Discord message (chat_message). It is configured as a conversational agent, integrating the language model and memory to generate an appropriate response. It also has a prompt to keep the reply concise, under 1800 characters.
- **OpenAI -4o-mini (Langchain Language Model):** This node connects to the OpenAI API and uses the gpt-4o-mini-2024-07-18 model for generating AI responses. It is the core AI component of the workflow.
- **Message History (Redis Chat Memory):** This node manages the conversation history using Redis. It stores and retrieves chat messages, ensuring the Chat Agent maintains context for each user based on their user_id. This is critical for coherent multi-turn conversations.
- **Calculator (Langchain Tool):** This node provides a calculator tool that the AI agent can utilize if a mathematical calculation is required within the conversation. This expands the capabilities of the AI beyond just text generation.
- **Response fromAI (Discord):** This node sends the AI-generated response back to the Discord channel. It uses the Discord Bot API credentials and replies in a thread under the original message (message_id) in the specified channel_id.
- **Sticky Note1, Sticky Note2, Sticky Note3, Sticky Note4, Sticky Note5, Sticky Note:** These are informational nodes within the workflow providing instructions, code snippets for the Discord bot, and setup guidance. They cover the .env file, requirements.txt, the Python bot code, and general recommendations for channel configuration and adding tools.

5️⃣ Setting up Redis
- Choose a Redis hosting provider: you can use a cloud provider like Redis Labs or Aiven, or set up your own Redis instance on a VPS.
- Obtain the Redis connection details: once your Redis instance is set up, you will need the host, port, and password (if applicable).
- Configure the n8n Redis nodes: in your n8n workflow, configure the "Message History" node with your Redis connection details. Ensure the Redis credential ✅ redis-for-n8n is properly set up with your Redis instance details (host, port, password).

6️⃣ Customizing the Template
- **AI Model:** You can easily swap out the "OpenAI -4o-mini" node with any other AI service supported by n8n (e.g., Cohere, Hugging Face) to use a different language model. Ensure the new language-model node is connected to the ai_languageModel input of the "Chat Agent" node.
- **Agent Prompt:** Modify the text parameter in the "Chat Agent" node to change the AI's persona, provide specific instructions, or adjust the response length.
- **Additional Tools:** The "Calculator" node is an example of an AI tool. You can add more Langchain tool nodes (e.g., search, data lookup) and connect them to the ai_tool input of the "Chat Agent" node to extend the AI's capabilities. Refer to "Sticky Note5" in the workflow for a reminder.
- **Channel Filtering:** Adjust the LISTEN_CHANNELS list in the main.py file of your Discord bot to include or exclude specific Discord channel IDs where the bot should listen for messages.
- **Thread Management:** The "Response fromAI" node can be modified to change how threads are created or managed, or to send responses directly to the channel instead of a thread. The current setup links the response to the original message ID (message_reference).

7️⃣ Testing Instructions
- Start the Discord bot: ensure your main.py script is running on Pella.
- Activate the n8n workflow: make sure your workflow is active and listening for webhooks.
- Send a message in Discord: go to one of the LISTEN_CHANNELS in your Discord server and send a message.
- Verify the response: the bot should capture the message, send it to n8n, receive an AI-generated response, and post it as a thread under your original message.
- Check Redis: verify that the conversation history is being stored and updated correctly in your Redis instance. Look for keys related to user IDs.

✅ Now your bot is running in the background! 🚀
by Srinivasan KB
This n8n workflow provides a ready-to-use API endpoint for extracting structured data from images. It processes an image URL using an AI-powered OCR model and returns the extracted details in a structured JSON format.

Use Cases
- **Document OCR** – Extract details from ID cards, invoices, receipts, etc.
- **Text Extraction from Images** – Process screenshots, scanned documents, and photos.
- **Automated Form Processing** – Digitize and capture information from paper forms.
- **Business Card Data Extraction** – Extract names, emails, and phone numbers from business cards.

How It Works
- Send a GET request with an image URL and define the required extraction parameters (see the example call at the end of this entry).
- The image is converted to base64 for processing.
- The AI model (Gemini API – Flash Lite) extracts the relevant text.
- The response returns structured JSON data containing only the requested fields.

Features
✔️ No-Code API Setup – Easily integrate into any application.
✔️ Customizable Extraction – Modify the request parameters to fit your needs.
✔️ AI-Powered OCR – Uses advanced models for accurate text recognition.
✔️ Automated Processing – Ideal for document processing and digitization.

Integration
- Works with any frontend/backend system that supports API calls.
- Can be used for workflow automation in CRM, ERP, and document management solutions.
- Supports further customization based on specific OCR requirements.
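As a reference, here is a minimal sketch of calling such an endpoint from JavaScript. The webhook path and the query-parameter names (`image_url`, `fields`) are hypothetical placeholders, not the template's actual configuration — use whatever your Webhook node and workflow define.

```javascript
// Hypothetical example call to the workflow's webhook endpoint.
// Replace the base URL, path, and parameter names with your own configuration.
const baseUrl = "https://your-n8n-instance.tld/webhook/extract-image-data";

const params = new URLSearchParams({
  image_url: "https://example.com/sample-invoice.jpg",  // image to process
  fields: "invoice_number,total_amount,vendor_name",    // what to extract
});

const response = await fetch(`${baseUrl}?${params}`);
const data = await response.json();  // structured JSON with only the requested fields
console.log(data);
```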
by Aurélien P.
🌤️ Daily Weather Forecast Bot

A comprehensive n8n workflow that fetches detailed weather forecasts from OpenWeatherMap and sends beautifully formatted daily summaries to Telegram.

📋 Features
- 📊 **Daily Overview**: Complete temperature range, rainfall totals, and wind conditions
- ⏰ **Hourly Forecast**: Weather predictions at key times (9AM, 12PM, 3PM, 6PM, 9PM)
- 🌡️ **Smart Emojis**: Context-aware weather icons and temperature indicators
- 💡 **Smart Recommendations**: Contextual advice (umbrella alerts, clothing suggestions, sun protection)
- 🌪️ **Enhanced Details**: Feels-like temperature, humidity levels, wind speed, UV warnings
- 📱 **Rich Formatting**: HTML-formatted messages with emojis for excellent readability
- 🕐 **Timezone-Aware**: Proper handling of the Luxembourg timezone (CET/CEST)

🛠️ What This Workflow Does
- Triggers daily at 7:50 AM to send morning weather updates
- Fetches the 5-day forecast from the OpenWeatherMap API with 3-hour intervals
- Processes and analyzes the weather data with smart algorithms
- Formats a comprehensive report with HTML styling and emojis
- Sends it to Telegram with professional formatting and actionable insights

⚙️ Setup Instructions
1. OpenWeatherMap API
- Sign up at OpenWeatherMap
- Get your free API key (1000 calls/day included)
- Replace API_KEY in the HTTP Request node URL
2. Telegram Bot
- Message @BotFather on Telegram
- Send the /newbot command and follow the instructions
- Copy the bot token to n8n credentials
- Get your chat ID by messaging the bot, then visiting: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
- Update the chatId parameter in the Telegram node
3. Location Configuration
- Default location: Strassen, Luxembourg
- To change it: modify q=Strassen in the HTTP Request URL
- Format: q=CityName,CountryCode (e.g., q=Paris,FR)

🎯 Technical Details
- **API Source**: OpenWeatherMap 5-day forecast
- **Schedule**: Daily at 7:50 AM (configurable)
- **Format**: HTML with rich emoji formatting
- **Error Handling**: 3 retry attempts with 5-second delays
- **Rate Limits**: Uses only 1 API call per day
- **Timezone**: Europe/Luxembourg (handles CET/CEST automatically)

📊 Weather Data Analyzed
- Temperature ranges and "feels like" temperatures
- Precipitation forecasts and accumulation
- Wind speed and conditions
- Humidity levels and comfort indicators
- Cloud coverage and visibility
- UV index recommendations
- Time-specific weather patterns

💡 Smart Features
- **Conditional Recommendations**: Only shows relevant advice
- **Night/Day Awareness**: Different emojis for time of day
- **Temperature Context**: Color-coded temperature indicators
- **Weather Severity**: Appropriate icons for weather intensity
- **Humidity Comfort**: Comfort level indicators
- **Wind Analysis**: Descriptive wind condition text

🔧 Customization Options
- **Schedule**: Modify the trigger time in the Schedule node
- **Location**: Change the city in the HTTP Request URL
- **Forecast Hours**: Adjust the desiredHours array in the code (see the sketch at the end of this entry)
- **Temperature Thresholds**: Modify the emoji temperature ranges
- **Recommendation Logic**: Customize the advice triggers

📱 Sample Output
🌤️ Weather Forecast for Strassen, LU
📅 Monday, 2 June 2025
📊 Daily Overview
🌡️ Range: 12°C - 22°C
💧 Comfortable (65%)
⏰ Hourly Forecast
🕒 09:00 ☀️ 15°C
🕒 12:00 🌤️ 20°C
🕒 15:00 ☀️ 22°C (feels 24°C)
🕒 18:00 ⛅ 19°C
🕒 21:00 🌙 16°C
📡 Data from OpenWeatherMap | Updated: 07:50 CET

🚀 Getting Started
1. Import this workflow into your n8n instance
2. Add your OpenWeatherMap API key
3. Set up the Telegram bot credentials
4. Test manually first
5. Activate it for daily automated runs

📋 Requirements
- n8n instance (cloud or self-hosted)
- Free OpenWeatherMap API account
- Telegram bot token
- Basic understanding of n8n workflows
Perfect for: Daily weather updates, team notifications, personal weather tracking, smart home automation triggers.
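The workflow's Code node isn't reproduced here, so this is only a minimal sketch of the hour-filtering and temperature-emoji logic it describes, assuming the standard OpenWeatherMap 5-day/3-hour response shape (a `list` array with `dt` and `main.temp`); the thresholds and output format are illustrative, not the template's exact implementation.

```javascript
// Minimal sketch of the desiredHours filtering and emoji logic described above,
// assuming the standard OpenWeatherMap 5-day forecast response ($json.list).
const desiredHours = [9, 12, 15, 18, 21];   // local hours to include in the report
const tz = 'Europe/Luxembourg';

const tempEmoji = (t) => (t >= 25 ? '🔥' : t >= 18 ? '☀️' : t >= 10 ? '⛅' : '❄️');

const lines = ($json.list ?? [])
  .filter((entry) => {
    const hour = Number(
      new Date(entry.dt * 1000).toLocaleString('en-GB', { hour: 'numeric', hour12: false, timeZone: tz })
    );
    return desiredHours.includes(hour);
  })
  .slice(0, desiredHours.length)            // keep only the first day's slots
  .map((entry) => {
    const time = new Date(entry.dt * 1000).toLocaleTimeString('en-GB', {
      hour: '2-digit', minute: '2-digit', timeZone: tz,
    });
    return `🕒 ${time} ${tempEmoji(entry.main.temp)} ${Math.round(entry.main.temp)}°C`;
  });

return [{ json: { text: lines.join('\n') } }];
```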
by Jimleuk
This n8n template demonstrates how to get started with Gemini 2.0's new bounding-box detection capabilities in your workflows.

The key difference is that this enables prompt-based object detection for images, which is quite powerful for things like contextual search over an image, e.g. "Put a bounding box around all adults with children in this image" or "Put a bounding box around cars parked out of bounds of a parking space".

How it works
- An image is downloaded via the HTTP Request node and an "Edit Image" node is used to extract the file's width and height.
- The image is then given to the Gemini 2.0 API to parse and return the coordinates of the bounding boxes of the requested subjects. In this demo, we've asked the AI to identify all bunnies.
- The coordinates are then rescaled using the original image's width and height to correctly align them (a hedged sketch of this step follows below).
- Finally, to measure the accuracy of the object detection, we use the "Edit Image" node to draw the bounding boxes onto the original image.

How to use
- Really up to the imagination! Perhaps a form of grounding for evidence-based workflows, or a higher form of image search, can be built.

Requirements
- Google Gemini for LLM

Customising the workflow
- This template is just a demonstration of an experimental version of Gemini 2.0. It is recommended to wait for Gemini 2.0 to come out of this stage before using it in production.
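For illustration, a minimal sketch of the rescaling step, assuming Gemini returns boxes in its documented `box_2d` convention ([ymin, xmin, ymax, xmax] normalized to a 0–1000 range); the field names on the incoming item (`width`, `height`, `boxes`) are hypothetical and will differ from the template's actual expressions.

```javascript
// Hedged sketch of rescaling Gemini bounding boxes to pixel coordinates.
// Assumes boxes arrive as [ymin, xmin, ymax, xmax] normalized to 0-1000 and that
// width/height come from the "Edit Image" (info) step earlier in the workflow.
const { width, height, boxes } = $json;  // hypothetical field names on the incoming item

const scaled = (boxes ?? []).map(([ymin, xmin, ymax, xmax]) => ({
  x: Math.round((xmin / 1000) * width),
  y: Math.round((ymin / 1000) * height),
  w: Math.round(((xmax - xmin) / 1000) * width),
  h: Math.round(((ymax - ymin) / 1000) * height),
}));

// x/y/w/h can then be fed to the Edit Image node's draw operation to overlay the boxes.
return [{ json: { boxes: scaled } }];
```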
by n8n Team
This workflow creates, updates, or deletes a Notion database page when an issue is created, updated, or deleted in Jira. Subsequent updates to the issue's title or status in Jira are reflected in the Notion database. If you need to send more fields to Notion, this template is easily extendible, as described in the setup. The Notion database requires setup before the workflow can be used.

Prerequisites
- Notion account and Notion credentials.
- Jira account and Jira credentials.

How it works
- When a new issue is created in Jira, the workflow creates a new page in the Notion database with all the required fields.
- When the issue's title or status is updated in Jira, the workflow updates the specific Notion database page identified by the "Issue Key" field in Notion.
- If the status in Jira is set to "Done", the workflow marks the Notion database page's "Done" field as true.
- When the issue is deleted in Jira, the workflow archives the Notion database page.

Setup
This workflow requires that you set up a Notion database. To do so, follow the steps below:
1. In Notion, create a new database.
2. Add the following columns to the database:
   - Done (with type "Checkbox")
   - Title (renamed from "Name")
   - Status (with the following options: "To Do", "In Progress", "Done")
   - Link (with type "URL")
   - Issue ID (with type "Number")
   - Issue Key (with type "Text")
3. Add any other fields you require to the database.
4. Share the database to n8n.

By default, the workflow fills all the fields provided above, except for any additional fields you add (a hedged sketch of wiring in extra fields follows below).
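To illustrate the extension point, here is a minimal sketch of preparing extra Jira fields for the Notion node in an n8n Code node. The Jira payload paths and the Notion property names below are assumptions for illustration, not the template's actual mapping — adjust them to your Jira project and the extra columns you added.

```javascript
// Hypothetical sketch: map additional Jira fields onto Notion properties.
// Adjust the payload paths and property names to match your Jira project and database.
const issue = $json.issue ?? {};

return [{
  json: {
    issueKey: issue.key,                                   // used to locate the Notion page
    properties: {
      "Priority": issue.fields?.priority?.name ?? "None",  // extra column added in Notion
      "Assignee": issue.fields?.assignee?.displayName ?? "Unassigned",
      "Due Date": issue.fields?.duedate ?? null,
    },
  },
}];
```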
by Niklas Hatje
Use case
When collecting leads via a form you typically face a few problems:
- You often end up with a bunch of leads who don't have a valid email address
- You want to know as much about the new lead as possible but also want to keep the form short
- After forms are submitted you have to go through the submissions and decide which ones to add to your CRM

This workflow helps you fix all of those problems.

What this workflow does
The workflow checks every new form submission and verifies the email using Hunter.io. If the email is valid, it then tries to enrich the person using Clearbit and saves the new lead into your HubSpot CRM.

Setup
- Add your Hunter, Clearbit and HubSpot credentials
- Click the Test Workflow button, enter your email and check your HubSpot
- Activate the workflow and use the form trigger production URL to collect your leads in a smart way

How to adjust it to your needs
- Change the form to the form you need for your use case (e.g. Typeform, Google Forms, SurveyMonkey etc.)
- Add criteria before an account is added to your CRM. This could, for example, be the size of the company, its industry etc. You can find some inspiration in our other template "Reach out via Email to new form submissions that meet a certain criteria" (a hedged filter sketch follows below)
- Add more data sources to save the new lead in
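As one example of such a criterion, here is a minimal sketch of a Code-node filter on the Clearbit enrichment result. The `company.metrics.employees` path is an assumption about the enrichment output, not the template's actual structure — check the output of your Clearbit node and adjust the path and threshold before relying on it.

```javascript
// Hedged sketch: only pass leads whose company meets a size threshold.
// The field path into the Clearbit enrichment result is an assumption - verify it
// against the actual output of your Clearbit node.
const MIN_EMPLOYEES = 50;

return $input.all().filter((item) => {
  const employees = item.json.company?.metrics?.employees ?? 0;
  return employees >= MIN_EMPLOYEES;  // drop leads from companies below the threshold
});
```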
by AdrianWang
How it works
This workflow automates the conversion of various document formats (such as PDF, Word, and PPT) into Markdown. It connects to the MinerU API service, which leverages OCR, formula, and table recognition to produce high-quality output. Users initiate the process by simply uploading a document through an n8n chat interface.

Set up steps
1. Ensure you have a local n8n instance running.
2. Set up and run the MinerU MCP (Model Context Protocol) server locally.
3. Import this workflow into your n8n instance.
4. Configure your AI model credentials (e.g., for OpenAI, add your API Key and Base URL).
5. Click the "Write Files from Disk" node and edit the file path to your desired local save location.
6. Click the "MCP Client" node and input your MinerU MCP server address (e.g., http://localhost:8000/sse).
7. Click the "Open Chat" button to upload a file, send a message, and test the workflow.