by Angel Menendez
Enhance query resolution with the Knowledge Base Tool! Our KB Tool - Confluence KB is crafted to integrate seamlessly into the IT Ops AI SlackBot Workflow, enhancing the IT support process by enabling sophisticated search and response capabilities via Slack.

**Workflow Functionality**
- **Receive Queries**: Directly accepts user queries from the main workflow, initiating a dynamic search process.
- **AI-Powered Query Transformation**: Uses OpenAI's models (or a local AI model) to refine user queries into searchable keywords that are most likely to retrieve relevant information from the knowledge base.
- **Confluence Integration**: Executes searches within Confluence using the refined keywords to find the most applicable articles and information.
- **Deliver Accurate Responses**: Gathers essential details from the Confluence results, including article titles, links, and summaries, and prepares them to be sent back to the parent workflow for the final user response.

To view a demo video of this workflow in action, click here.

**Quick Setup Guide**
- Ensure correct configurations are set for the OpenAI and Confluence API integrations.
- Customize the query transformation logic to match your specific knowledge base structure and improve search accuracy. A sketch of the Confluence search step appears below.

Need help? Dive into our Documentation or get support from the Community Forum!

Deploy this tool to provide precise and informative responses, significantly boosting the efficiency and reliability of your IT support workflow.
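To make the Confluence step concrete, here is a minimal Code-node-style sketch (JavaScript) of how AI-refined keywords could drive a Confluence search. The CQL query and response fields follow Confluence Cloud's REST search API, but the base URL, credentials, and the `refined_keywords` input are placeholders — adapt them to your instance.

```javascript
// Minimal sketch, assuming a recent n8n/Node runtime with global fetch.
// CONFLUENCE_BASE and the Basic-auth credential are placeholders.
const CONFLUENCE_BASE = 'https://your-domain.atlassian.net/wiki';

// Keywords produced by the AI query-transformation step, e.g. "VPN login failure".
const keywords = $json.refined_keywords;

// Confluence CQL: match pages whose text contains the keywords.
const cql = encodeURIComponent(`type=page AND text ~ "${keywords}"`);

const res = await fetch(`${CONFLUENCE_BASE}/rest/api/search?cql=${cql}&limit=5`, {
  headers: { Authorization: `Basic ${Buffer.from('email:api-token').toString('base64')}` },
});
const data = await res.json();

// Keep only what the parent workflow needs: title, link, summary.
return (data.results ?? []).map(r => ({
  json: {
    title: r.content?.title,
    link: `${CONFLUENCE_BASE}${r.url ?? ''}`,
    summary: r.excerpt,
  },
}));
```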
by Jimleuk
This n8n workflow shows how multimodal LLMs with AI vision can tackle tricky image-validation tasks that are near impossible to achieve with code and often impractical for humans to do at scale.

You may need image validation when user-submitted photos or images must meet certain criteria before being accepted. A wine review website may require users to submit only photos of wine bottles with labels; a bank may require account holders to submit scanned documents for verification; and so on.

In this demonstration, our scenario is to analyse a set of portraits to verify whether they meet the criteria for valid passport photos according to the UK government website (https://www.gov.uk/photos-for-passports).

**How it works**
- Our set of portraits are jpg files downloaded from Google Drive using the Google Drive node.
- Each image is resized using the Edit Image node to strike a balance between resolution and processing speed.
- Using the Basic LLM node, we define a "user message" option with the type of binary (data). This allows us to pass each portrait to the LLM as an input.
- With our prompt containing the criteria pulled from the passport photo requirements webpage, the LLM can validate whether the photo meets the criteria.
- A structured output parser structures the LLM's response into a JSON object with an "is_valid" boolean property. This is useful for extending the workflow further, as sketched below.

**Requirements**
- Google Gemini API key
- Google Drive account

**Customising this workflow**
- Not using Gemini? n8n's LLM node works with any compatible multimodal LLM, so feel free to swap Gemini out for OpenAI's GPT-4o or Anthropic's Claude Sonnet.
- Don't need to validate portraits? Try other use cases such as document classification, security footage analysis, people tagging in photos, and more.
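As a rough illustration, the snippet below sketches the kind of JSON schema the structured output parser can enforce, plus a downstream Code-node filter on the "is_valid" flag. Only "is_valid" comes from the template; the "reasons" property and the filtering step are hypothetical extensions.

```javascript
// Sketch: the shape the Structured Output Parser could enforce per portrait.
// Properties beyond "is_valid" are illustrative, not from the template.
const schema = {
  type: 'object',
  properties: {
    is_valid: { type: 'boolean' }, // does the photo meet the UK passport criteria?
    reasons: {                     // hypothetical: why it failed, if it did
      type: 'array',
      items: { type: 'string' },
    },
  },
  required: ['is_valid'],
};

// Downstream, an IF or Code node can branch on the parsed result:
const verdicts = $input.all().map(item => item.json);
return verdicts
  .filter(v => !v.is_valid) // keep only rejected photos for follow-up
  .map(v => ({ json: v }));
```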
by Jimleuk
This n8n workflow demonstrates how we can use multimodal LLMs to parse and extract from PDF documents in n8n. In this particular scenario, we're passing a candidate's CV/resume to an AI which filters out unqualified applications. However, this sneaky candidate has added a hidden prompt to bypass our bot! Whatever will we do? No fret — using AI vision is one approach to solve this problem... read on!

**How it works**
- Our candidate's CV/resume is a PDF downloaded via Google Drive for this demonstration.
- The PDF is then converted into a PNG image using a tool called Stirling PDF (see the sketch after this listing). Since the hidden prompt has a white font colour, it is invisible in the converted image.
- The image is then forwarded to a Basic LLM node to be processed by our multimodal model — in this example, we'll use Google's Gemini 1.5 Pro.
- In the Basic LLM node, we set a User Message with the type of Binary. This allows us to send the image file directly in our request.
- The LLM is now immune to the hidden prompt and its response is as expected.

The example CV/resume with the hidden prompt can be found here: https://drive.google.com/file/d/1MORAdeev6cMcTJBV2EYALAwll8gCDRav/view?usp=sharing

**Requirements**
- Google Gemini API key. Alternatively, GPT-4 will also work for this use case.
- Stirling PDF or another service which can convert PDFs into images. Note: for data privacy, this example uses a public API; it is recommended that you self-host and use a private instance of Stirling PDF instead.

**Customising the workflow**
- Swap out the manual trigger for another trigger, such as a webhook, to integrate with your existing services.
- This example demonstrates a validation use case, i.e. "does the candidate look qualified?". You could additionally extract data points instead, such as years of experience, previous companies, etc.
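For reference, a PDF-to-PNG call against a self-hosted Stirling PDF instance could look roughly like this. The endpoint path, form-field names, and the simplified binary hand-off are assumptions — verify them against your Stirling PDF instance's Swagger docs and n8n's binary-data helpers.

```javascript
// Minimal sketch, not the template's exact node configuration.
const STIRLING_BASE = 'http://localhost:8080'; // your private instance

// pdfBase64: the CV downloaded from Google Drive in the previous node.
const pdfBase64 = $json.pdfBase64; // placeholder input
const form = new FormData();
form.append(
  'fileInput',
  new Blob([Buffer.from(pdfBase64, 'base64')], { type: 'application/pdf' }),
  'cv.pdf'
);
form.append('imageFormat', 'png');

const res = await fetch(`${STIRLING_BASE}/api/v1/convert/pdf/img`, {
  method: 'POST',
  body: form,
});

// The response body is the converted image; pass it on as n8n binary data.
const png = Buffer.from(await res.arrayBuffer());
return [{
  json: {},
  binary: { data: { data: png.toString('base64'), mimeType: 'image/png', fileName: 'cv.png' } },
}];
```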
by felipe biava cataneo
**What this template does**
This template uses the LLaVA v1.5 7B model via the Groq API, which offers fast inference for multimodal models with vision capabilities for understanding and interpreting visual data from images. Users send an image and get back a description of the image from the model.

**Setup**
1. Open the Telegram app and search for the BotFather user (@BotFather).
2. Start a chat with the BotFather.
3. Type /newbot to create a new bot.
4. Follow the prompts to name your bot and get a unique API token.
5. Save your access token and username.

Once your bot is set up, you can send it an image and get back a description.
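For orientation, the request to Groq could look like the sketch below. Groq exposes an OpenAI-compatible chat-completions endpoint; the model id shown reflects Groq's LLaVA preview naming at the time and may have changed, so check Groq's current model list. The API key handling and `base64Image` input are placeholders.

```javascript
// Sketch: describe a Telegram photo with Groq's OpenAI-compatible API.
const base64Image = $json.base64Image; // placeholder — from the Telegram "get file" step

const res = await fetch('https://api.groq.com/openai/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.GROQ_API_KEY}`, // placeholder key handling
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'llava-v1.5-7b-4096-preview', // verify against Groq's current model list
    messages: [{
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image.' },
        { type: 'image_url', image_url: { url: `data:image/jpeg;base64,${base64Image}` } },
      ],
    }],
  }),
});
const { choices } = await res.json();
return [{ json: { description: choices[0].message.content } }];
```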
by Juan Carlos Cavero Gracia
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Description**
This automation template is designed for content curators, marketers, and anyone looking to supercharge their content-sharing strategy. It transforms any web article, blog post, or news link into a series of platform-specific social media posts, generated by AI. It also captures a live screenshot of the webpage to use as the post image, automating the entire process of publishing across X (Twitter), LinkedIn, Threads, and Reddit.

Note: The default example is configured to share n8n templates, but this workflow can promote any web page, article, or news story — just change the URL! The upload-post node only works on self-hosted n8n instances, but you can use the standard HTTP Request node to upload the content instead.

**Who Is This For?**
- **Content Curators & Marketers**: Effortlessly share valuable industry news and articles with tailored messages and visuals for each audience.
- **Social Media Managers**: Keep your social feeds consistently active with relevant, high-quality content without the manual overhead.
- **Community Builders & Brand Evangelists**: Quickly disseminate product updates, tutorials, and blog posts to your community on all relevant platforms.
- **Professionals & Thought Leaders**: Build your personal brand by easily sharing insightful articles with automated visuals, adding your unique perspective.

**What Problem Does This Workflow Solve?**
Sharing a single piece of content across multiple social platforms is tedious: you need to manually write unique posts, create visuals, and then publish everything. This workflow addresses these challenges by:
- **Automating Content Creation**: Uses a powerful AI agent (Google Gemini) to read any URL and write compelling, unique posts for each social network.
- **Generating Visuals Automatically**: Captures a high-quality screenshot of the source webpage to use as a visually appealing image in your posts, increasing engagement.
- **Ensuring Platform-Specific Tone**: The AI is instructed to generate professional posts for LinkedIn, concise threads for X, conversational updates for Threads, and community-focused posts for Reddit.
- **One-Click Distribution**: Takes a single URL as input and handles the entire content creation and sharing process across multiple platforms automatically.

**How It Works**
1. Input a URL: In the "Set Input Data" node, simply paste the URL of the article or page you want to share.
2. AI analysis & generation: The workflow sends the URL to the AI agent, which scrapes the content and generates four distinct, ready-to-publish posts.
3. Screenshot generation: At the same time, it uses the ScreenshotOne service to capture a high-quality image of the provided URL.
4. Cross-platform publishing: The generated content and the screenshot are automatically sent to the corresponding nodes to be posted on X, LinkedIn, and Threads, while the text-only version is sent to Reddit.

**Setup**
1. AI model credentials: Add your Google Gemini API key to the Google Gemini Chat Model node to power the AI agent.
2. Screenshot service (ScreenshotOne): The workflow uses ScreenshotOne to generate images for your posts. Create a free account at screenshotone.com to get your own API key (the free plan includes 100 screenshots per month). In the Upload Post X, Upload Post LinkedIn, and Upload Post Threads nodes, go to the Photos parameter (under Additional Fields) and replace the existing access_key in the URL with your own (see the sketch after this listing).
3. Upload-Post account: This workflow uses upload-post.com for multi-platform posting. Create a free account at upload-post.com to get your API Token and User ID, then add the credentials in the Upload Post X, Upload Post LinkedIn, and Upload Post Threads nodes.
4. Reddit credentials: Connect your Reddit account using OAuth2 in the Reddit node to enable posting.
5. Customize the AI (optional): Edit the prompt in the Social Media Agent node to match your content. The default prompt is optimized for sharing n8n templates, but you can easily adapt it for any topic to fit your brand's voice and style.

**Requirements**
- **Accounts**: n8n, Google (for Gemini API), ScreenshotOne, upload-post.com, Reddit.
- **API Keys & Credentials**: Google Gemini API key, ScreenshotOne API key, upload-post.com API Token & User ID, Reddit OAuth2 credentials.

Use this template to become a content-sharing powerhouse, saving hours of work while increasing your reach and engagement across the web.
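The setup above mentions replacing the access_key in the screenshot URL; the sketch below shows how such a ScreenshotOne URL is composed. The `/take` endpoint and the `access_key`/`url` parameters follow ScreenshotOne's documented API; the other rendering options are illustrative defaults.

```javascript
// Sketch: build a ScreenshotOne request URL for the post image.
const accessKey = 'YOUR_SCREENSHOTONE_ACCESS_KEY'; // placeholder
const targetUrl = $json.url; // the article URL from the "Set Input Data" node

const params = new URLSearchParams({
  access_key: accessKey,
  url: targetUrl,
  format: 'png',
  viewport_width: '1280',   // illustrative option
  viewport_height: '720',   // illustrative option
});

return [{ json: { screenshot_url: `https://api.screenshotone.com/take?${params}` } }];
```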
by Yaron Been
🚀 Automated Funding Intelligence: CrunchBase to Google Sheets Tracking Workflow!

**Workflow Overview**
This cutting-edge n8n automation is a sophisticated startup funding intelligence tool designed to transform market research into actionable insights. By intelligently connecting CrunchBase, data processing, and Google Sheets, this workflow:
- **Discovers Funding Opportunities**: Automatically retrieves the latest funding rounds, tracks industry-specific investments, and eliminates manual market research effort.
- **Intelligent Data Processing**: Filters funding data by location and industry, extracts key investment metrics, and ensures comprehensive market intelligence.
- **Seamless Data Logging**: Automatically updates Google Sheets, creating a real-time investment database for rapid market trend analysis.
- **Scheduled Intelligence Gathering**: Daily automated tracking, consistent market insight updates, zero manual intervention required.

**Key Benefits**
- 🤖 **Full Automation**: Zero-touch funding research
- 💡 **Smart Filtering**: Targeted investment insights
- 📊 **Comprehensive Tracking**: Detailed funding intelligence
- 🌐 **Multi-Source Synchronization**: Seamless data flow

**Workflow Architecture**
- 🔹 **Stage 1: Funding Discovery** — scheduled trigger for daily market scanning; CrunchBase API integration; intelligent filtering (location-based selection, industry-specific focus, most recent funding rounds).
- 🔹 **Stage 2: Data Extraction** — comprehensive metadata parsing, key information retrieval, structured data preparation (see the sketch after this listing).
- 🔹 **Stage 3: Data Logging** — Google Sheets integration, automatic row appending, real-time database updates.

**Potential Use Cases**
- **Venture Capitalists**: Investment opportunity tracking
- **Startup Scouts**: Market trend analysis
- **Market Researchers**: Comprehensive funding insights
- **Investors**: Strategic decision support
- **Business Strategists**: Competitive landscape monitoring

**Setup Requirements**
- CrunchBase API: API credentials, configured access permissions, funding round tracking setup
- Google Sheets: connected Google account, prepared tracking spreadsheet, appropriate sharing settings
- n8n installation: cloud or self-hosted instance, workflow configuration, API credential management

**Future Enhancement Suggestions**
- 🤖 Advanced investment trend analysis
- 📊 Multi-source funding aggregation
- 🔔 Customizable alert mechanisms
- 🌐 Expanded industry coverage
- 🧠 Machine-learning insights generation

**Technical Considerations**
- Implement robust error handling
- Use secure API authentication
- Maintain flexible data processing
- Ensure compliance with API usage guidelines

**Ethical Guidelines**
- Respect business privacy
- Use data for legitimate research
- Maintain transparent information gathering
- Provide proper attribution

**Hashtag Performance Boost 🚀**
#StartupFunding #InvestmentIntelligence #MarketResearch #AIWorkflow #DataAutomation #VentureCapital #TechInnovation #InvestmentTracking #BusinessIntelligence #StartupEcosystem

**Workflow Visualization**
[Daily Trigger]
⬇️
[Fetch Funding Rounds]
⬇️
[Extract & Format Data]
⬇️
[Log to Google Sheets]

**Connect With Me**
Ready to revolutionize your funding intelligence?
- 📧 Email: Yaron@nofluff.online
- 🎥 YouTube: @YaronBeen
- 💼 LinkedIn: Yaron Been

Transform your market research with intelligent, automated workflows!
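The Stage 2 extraction could look something like the following Code-node sketch. The field paths mirror Crunchbase's v4 API response shape but should be treated as assumptions — adjust them to the field_ids your query actually requests.

```javascript
// Sketch: flatten Crunchbase-style funding-round items into rows for
// Google Sheets. Field paths are illustrative, not guaranteed.
const rows = $input.all().map(({ json: round }) => ({
  json: {
    company: round.identifier?.value,                     // funded organization
    announced_on: round.announced_on,                     // funding date
    investment_type: round.investment_type,               // e.g. "series_a"
    money_raised_usd: round.money_raised?.value_usd,      // round size
    location: round.location_identifiers?.[0]?.value,     // HQ location
  },
}));
return rows;
```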
by Lucas Peyrin
**How it works**
This workflow demonstrates a fundamental pattern for securing a webhook by requiring an API key. It acts as a gatekeeper, checking for a valid key in the request header before allowing the request to proceed.

1. Incoming request: The Secured Webhook node receives an incoming POST request. It expects an API key to be sent in the x-api-key header.
2. API key verification: The Check API Key node takes the key from the incoming request's header. It then makes an internal HTTP request to a second webhook (Get API Key) which acts as a mock database. This second webhook retrieves a list of registered API keys (from the Registered API Keys node) and filters it to find a match for the key that was provided.
3. Conditional response: If a match is found, the API Key Identified node routes the execution to the "success" path, returning a 200 OK response with the identified user's ID. If no match is found, it routes to the "unauthorized" path, returning a 401 Unauthorized error.

This pattern separates the public-facing endpoint from the data source, which is good security practice. A condensed sketch of the check appears after the setup steps below.

**Set up steps**
Setup time: ~2 minutes. This workflow is designed to be a self-contained example.

1. Set up credentials: This workflow uses "Header Auth" for its internal communication. Go to Credentials and create a new Header Auth credential. You can use any name and value (e.g., Name: X-N8N-Auth, Value: my-secret-password). Select this credential in all four webhook/HTTP Request nodes.
2. Add your API keys: Open the Registered API Keys node. This is your mock database. Edit the array to include the user_id and api_key pairs you want to authorize.
3. Activate the workflow.
4. Test it: Use the Test Secure Webhook node to send a request. Try it with a valid key from your list to see the success response, then change the x-api-key header to an invalid key to see the 401 Unauthorized error.
5. For production: Replace the mock database part of this workflow (the Get API Key webhook and Registered API Keys node) with a real database node like Supabase, Postgres, or Baserow to look up keys.
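For illustration, here is the gatekeeper logic collapsed into a single JavaScript snippet. In the actual workflow this is split across the Check API Key, Get API Key, and Registered API Keys nodes; the key values below are placeholders mirroring the mock database.

```javascript
// Sketch of the API-key check, condensed into one Code node.
const registeredKeys = [
  { user_id: 'user_1', api_key: 'key-abc123' }, // placeholder entries
  { user_id: 'user_2', api_key: 'key-def456' },
];

// n8n webhook data exposes request headers (lowercased) under $json.headers.
const providedKey = $json.headers?.['x-api-key'];

const match = registeredKeys.find(k => k.api_key === providedKey);

if (!match) {
  // The Respond to Webhook node on this path returns 401 Unauthorized.
  return [{ json: { authorized: false, status: 401 } }];
}
// Success path: 200 OK with the identified user's id.
return [{ json: { authorized: true, status: 200, user_id: match.user_id } }];
```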
by Yaron Been
Automated system to track and analyze the technology stacks used by target companies, helping identify decision-makers and technology trends.

**🚀 What It Does**
- Tracks the technology stack of target companies
- Identifies key decision-makers (CTOs, Tech Leads)
- Monitors technology changes and updates
- Provides competitive intelligence
- Generates actionable insights

**🎯 Perfect For**
- B2B SaaS companies
- Technology vendors
- Sales and business development teams
- Competitive intelligence analysts
- Market researchers

**⚙️ Key Benefits**
- ✅ Identify potential customers
- ✅ Stay ahead of technology trends
- ✅ Target decision-makers effectively
- ✅ Monitor competitor technology stacks
- ✅ Data-driven sales strategies

**🔧 What You Need**
- BuiltWith API key
- n8n instance
- CRM integration (optional)
- Email/Slack for alerts

**📊 Data Tracked**
- Company technologies
- Hosting providers
- Frameworks and libraries
- Analytics tools
- Marketing technologies

**🛠️ Setup & Support**
Quick setup: deploy in 20 minutes with our step-by-step guide.
📺 Watch Tutorial | 💼 Get Expert Support | 📧 Direct Help

Gain a competitive edge by understanding the technology landscape of your target market. A sketch of the BuiltWith lookup is shown below.
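Under the hood, a lookup against BuiltWith's Domain API could look like this sketch. The version segment in the URL and the response shape are assumptions — confirm both against your BuiltWith account's API documentation.

```javascript
// Sketch: look up a company's tech stack via BuiltWith's Domain API.
const KEY = 'YOUR_BUILTWITH_API_KEY'; // placeholder
const domain = $json.domain;          // e.g. "example.com"

const res = await fetch(
  `https://api.builtwith.com/v21/api.json?KEY=${KEY}&LOOKUP=${encodeURIComponent(domain)}`
);
const data = await res.json();

// Illustrative flattening: one row per detected technology.
const techs =
  data.Results?.[0]?.Result?.Paths?.flatMap(p => p.Technologies ?? []) ?? [];
return techs.map(t => ({ json: { domain, name: t.Name, tag: t.Tag } }));
```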
by Agentick AI
This n8n workflow automatically scrapes the latest posts from a specified Reddit subreddit every day at 9 AM and sends a neatly formatted HTML email summary to your inbox. It highlights new community posts, including details like title, author, flair, upvotes, comments, and a brief preview — making it ideal for content curators, community managers, or Reddit enthusiasts who want daily updates.

**How It Works**
1. Trigger: The schedule node runs the workflow once every 24 hours at 9:00 AM.
2. Reddit scrape: A request is made to the desired subreddit (defined in the HTTP Request node) to pull post data.
3. Filter & format: JavaScript code filters posts created in the last 24 hours and transforms the data into structured summaries (see the sketch after this listing).
4. Email composition: A dynamic HTML email is generated summarizing the post details. If no new posts are found, a fallback message is displayed.
5. Email delivery: The Gmail node sends the email with subject, content, and timestamp.

**Use Cases**
- ✅ Stay informed about the latest subreddit activity.
- ✅ Automate daily newsletters for Reddit topics.
- ✅ Monitor niche communities for engagement trends.

**Requirements**
- Reddit subreddit link (set in the HTTP Request node).
- Gmail account with OAuth2 credentials set up in n8n.
- User-Agent string customized for your Reddit scraping.
- Schedule adjusted to your preferred timezone.

**Google Sheet Setup**
Not required for this workflow — no sheet integration is involved.

**Customizing the Workflow**
You can personalize this workflow by:
- Replacing the User-Agent value with a meaningful identifier to avoid Reddit rate-limiting.
- Updating the subreddit URL in the HTTP Request node.
- Changing the recipient address in the Send Gmail node.
- Tweaking the HTML email styling in the Prepare Email Content node.
- Adjusting the schedule time/frequency in the Trigger node.
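The "Filter & format" step could be implemented along these lines. The sketch assumes the HTTP Request node fetched the subreddit's public JSON feed (e.g. https://www.reddit.com/r/<subreddit>/new.json); the field names follow Reddit's listing format.

```javascript
// Sketch: keep posts from the last 24 h and shape them for the email.
const cutoff = Date.now() / 1000 - 24 * 60 * 60; // Reddit timestamps are in seconds

const posts = $json.data.children
  .map(c => c.data)
  .filter(p => p.created_utc >= cutoff)
  .map(p => ({
    title: p.title,
    author: p.author,
    flair: p.link_flair_text ?? 'None',
    upvotes: p.ups,
    comments: p.num_comments,
    preview: (p.selftext || '').slice(0, 200), // brief body preview
    link: `https://www.reddit.com${p.permalink}`,
  }));

// Downstream, the email node renders these summaries (or a fallback message
// if the list is empty).
return posts.map(p => ({ json: p }));
```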
by Yar Malik (Asfandyar)
**How it works**
- Trigger: Listens for an incoming chat message.
- Copy Assistant: Feeds the message (plus memory) into an OpenAI Chat Model and exposes two "tools": a Cold Email Writer Tool and a Sales Letter Tool.
- Tool execution: Depending on the user's intent, the appropriate tool generates the copy.
- Save output: Writes the generated email or sales letter into your target document via the Update a document node.

**Set up steps**
- Configure your OpenAI Chat Model credentials in n8n (no hard-coded keys!).
- Add and authenticate the Simple Memory credential (to keep context across messages).
- Create Google Docs (or MS Word) credentials for the Update a document node.
- Ensure your Chat trigger is pointing at your incoming-message endpoint.
- Mandatory: Drop sticky-note annotations on each tool node explaining where to enter API keys and how to tweak prompts.

Once everything's wired up, send a test chat message like "Write me a cold email for a fintech startup" and watch the workflow spin up a polished draft in your document.

**How to use**
1. Import the workflow JSON into n8n.
2. Configure your Chat trigger (webhook or form) to receive incoming messages.
3. Send a chat prompt like: "Write me a cold email for a B2B SaaS offering."
4. The "Copy Assistant" custom GPT picks the right tool (Cold Email or Sales Letter).
5. The generated copy is written directly into your linked Google Doc or Word document.

**Requirements**
- OpenAI API key (with Chat Completions & Custom GPTs enabled)
- Custom Assistant created in your ChatGPT dashboard (Assistant ID pasted into the Chat Model node)
- n8n instance (Cloud or self-hosted) with credentials set up for:
  - Simple Memory (to persist context)
  - Google Docs or Microsoft Word (for document output)

**Customising this workflow**
- Tweak the system and user prompts inside the Copy Assistant node to fit your brand voice.
- Swap in Slack, Teams, or email nodes instead of a document writer to deliver copy where you need it.
- Add or remove tools (e.g., a "Follow-up Email Writer") by duplicating the existing tool pattern.
- Use sticky-note annotations on every node to explain where to enter API keys, Assistant IDs, or prompt tweaks.
by Aditya Gaur
**Who is this template for?**
This template is for teams and administrators who use n8n to monitor Elastic alerts and want to receive automated email notifications when an alert is triggered. It leverages the Microsoft Graph API to send emails and provides an efficient way to notify users about alerts directly in their inbox.

**How it works**
The template connects to the Elastic API to retrieve alert data. When a new alert is detected, the workflow processes the alert content and sends an email notification via the Microsoft Graph API. The email includes alert details such as the alert name, timestamp, severity, and a summary of the message, allowing for quick action or review. A sketch of the Graph call appears after the setup steps.

**Setup steps**
1. Set up OAuth2 credentials in n8n for the Microsoft Graph API with the Mail.Send permission.
2. Configure your Elastic API endpoint in the HTTP Request node to retrieve alerts.
3. Modify the email recipients in the template to specify who will receive the alert notifications.
4. Customize the email format, if necessary, to include additional alert details or adjust the message.
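For reference, here is a minimal sketch of the sendMail request the workflow issues once an alert is detected. The /me/sendMail endpoint and message shape follow Microsoft Graph's documented API; the alert fields, token handling, and recipient address are placeholders (in n8n, the OAuth2 credential injects the token for you).

```javascript
// Sketch: email an Elastic alert via Microsoft Graph sendMail.
const alert = $json; // placeholder shape: { name, severity, timestamp, message }
const accessToken = 'GRAPH_OAUTH2_ACCESS_TOKEN'; // placeholder — use n8n's OAuth2 credential

await fetch('https://graph.microsoft.com/v1.0/me/sendMail', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${accessToken}`, // token must carry Mail.Send
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    message: {
      subject: `[${alert.severity}] Elastic alert: ${alert.name}`,
      body: {
        contentType: 'HTML',
        content: `<p><b>${alert.name}</b> fired at ${alert.timestamp}</p><p>${alert.message}</p>`,
      },
      toRecipients: [{ emailAddress: { address: 'itops@example.com' } }], // placeholder
    },
    saveToSentItems: true,
  }),
});
return [{ json: { sent: true } }];
```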
by Praveena
**Idea**
The idea for this app came about because I wanted to build a unique gift for my niece, who gets excited about her birthday (which I'm going to miss this year). The web app has a simple countdown (in HTML and JS), but more importantly there is an AI agent that will answer some specific questions and knows her preferences.

**How it works**
The questions from the app are sent via webhook to n8n, which pulls a preferences file (about her likes, dislikes, and personality) from Postgres, and an AI Agent answers and responds. The current status is stored back in Postgres (especially the status of the cat and universe happenings) before responding. A minimal sketch of the webhook call appears at the end of this listing.

**Features**
- Integrated AI chatbot via n8n webhook
- Persistent conversation history
- Minimizable chat interface
- Fallback support for offline testing
- "Where's Mittens?" — a query to track her lost cat in the multiverse
- Multiverse updates, with the most recent update stored

**Prerequisites**
- A PostgreSQL database. Alternatively, use any other database, but change the n8n nodes accordingly.
- An LLM API key.

**Step-by-step instructions**
1. Import this n8n workflow.
2. Add your LLM API key. I used OpenAI GPT-4.1.
3. For the web app scaffolding, you will need Node, HTML, and JavaScript. I've created a mini version using Node and JS with the web app and n8n connection settings here: <https://github.com/productiser/FiBirthdayAgent>
4. Run the PostgreSQL database script (one table for memory and context storage):

```sql
CREATE TABLE fifi_world_context (
    id TEXT PRIMARY KEY,                          -- e.g., 'agent_fifi'
    cat_location TEXT,                            -- e.g., "Bubble Nebula"
    cat_activity TEXT,                            -- e.g., "Playing laser tag with moon mice"
    fifi_preferences JSONB,                       -- e.g., likes/dislikes/foods/shows
    world_history TEXT,                           -- summary of narrative events
    last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

5. Modify the system prompt as per your needs.

**Built with**
- n8n (self-hosted)
- Self-hosted web app, hosted on Vercel
- Total spend: <£1 (AI costs only)
- Total time: <1 day

**Support**
Watch this video for a web app overview and how it looks: <https://youtu.be/e7PlrTdvwoM>
Contact me at info@pankstr.com or superllmuser@gmail.com for any queries.

Hope you enjoy!!
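On the web app side, the webhook call could be as simple as the sketch below. The webhook URL, payload shape, and response field are placeholders — match them to your own Webhook node configuration and the repo linked above.

```javascript
// Sketch: post a question from the web app to the n8n webhook.
async function askAgent(question) {
  const res = await fetch('https://your-n8n-host/webhook/fifi-agent', { // placeholder URL
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question, sessionId: 'fifi-birthday' }), // placeholder payload
  });
  const { answer } = await res.json(); // placeholder response field
  return answer; // e.g. "Mittens is playing laser tag with moon mice!"
}
```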