by Jimleuk
This n8n template shows you how to connect GitHub's free Models to your existing n8n AI workflows. Whilst it is possible to use HTTP nodes to access GitHub Models, the aim of this template is to use them with the existing n8n LLM nodes - saving you the trouble of refactoring! Please note, GitHub states that their model APIs are not intended for production usage. If you need higher rate limits, you'll need to use a paid service.
How it works
The approach builds a custom OpenAI-compatible API around the GitHub Models API - all done in n8n! First, we attach an OpenAI subnode to our LLM node and configure a new OpenAI credential. Within this new OpenAI credential, we change the "Base URL" to point at an n8n webhook we've prepared as part of this template. Next, we create 2 webhooks which the LLM node will now attempt to connect with: "models" and "chat completion". The "models" webhook simply calls the GitHub Models "list all models" endpoint and remaps the response to be compatible with our LLM node (a sketch of this remapping step appears at the end of this description). The "chat completion" webhook does a similar task with GitHub's chat completion endpoint.
How to use
Once connected, just open chat and ask away! Any LLM or AI agent node connected with this custom LLM subnode will send requests to the GitHub Models API, allowing you to try out a range of SOTA models for free.
Requirements
GitHub account and credentials for access to Models. If you've used the GitHub node previously, you can reuse this credential for this template.
Customising this workflow
This template is just an example. Use the custom OpenAI credential in your other workflows to test GitHub models.
References
https://docs.github.com/en/github-models/prototyping-with-ai-models
https://docs.github.com/en/github-models
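A minimal sketch of what the "models" webhook's remapping step might look like in an n8n Code node. It assumes the preceding HTTP Request node returned GitHub's model list one model per item, each with a `name` and `publisher` field - adjust to the actual response shape of the endpoint you call.

```javascript
// Sketch only: remap a GitHub Models listing into the shape an OpenAI-compatible
// client expects from GET /v1/models. Field names on the GitHub side are assumptions.
const githubModels = $input.all().map((item) => item.json);

const data = githubModels.map((m) => ({
  id: m.name ?? m.id,      // identifier the "chat completion" webhook will receive later
  object: 'model',
  created: 0,
  owned_by: m.publisher ?? 'github',
}));

// Return a single item holding the OpenAI-style list payload for the webhook response.
return [{ json: { object: 'list', data } }];
```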
by Jimleuk
Mistral OCR is a super convenient way to parse and extract data from multi-page PDFs or single images using AI. What sets it apart from the competition is that Mistral OCR also performs document page splitting and markdown conversion. This helps reduce the dependencies required for document parsing workflows where tools like StirlingPDF would otherwise be needed. Read the official documentation on the Mistral OCR API here: https://docs.mistral.ai/capabilities/document/#tag/ocr/operation/ocr_v1_ocr_post
How it works
To access Mistral OCR, you'll need to call the Mistral Cloud API via the HTTP Request node. Mistral OCR only accepts 2 file types: PDF and image. Here, we use 2 different requests to the Mistral OCR API to parse a bank statement PDF and a screenshot of a bank statement, extracting the tables from each (see the request sketch at the end of this description). Next, we explore a more secure method of uploading documents to the Mistral OCR API by using Mistral's cloud storage. In example 2, we first store a copy of our documents in Mistral Cloud and then generate a signed URL to retrieve the file before sending it to Mistral OCR. This ensures the file is not publicly accessible and protects it from unauthorised access. Finally, another way to use Mistral OCR is via document understanding. This allows you to ask questions about the document rather than extract its contents. In example 3, I demonstrate this use case by asking Mistral Small to tell me how many deposits are shown in the bank statement.
How to use
Ensure your documents are either publicly accessible for Mistral OCR or uploaded to Mistral Cloud. Alternatively, signed URLs from AWS S3 or Cloudflare R2 should also work.
Requirements
Mistral Cloud account and API key. You'll also need credit on your account to use Mistral OCR.
Customising the workflow
Mistral OCR also works for images such as charts and diagrams, so try using it on financial reports. Mistral OCR is even cheaper with batching enabled: results are returned within 24hrs but at half the price per page.
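A sketch of the OCR request the HTTP Request node makes, shown here as plain fetch for clarity. The endpoint, model name and `document` payload follow the Mistral OCR documentation at the time of writing; the example URL is a placeholder, so verify the fields against the current API reference.

```javascript
// Sketch: parse a publicly accessible PDF with Mistral OCR (assumes MISTRAL_API_KEY is set).
const response = await fetch('https://api.mistral.ai/v1/ocr', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'mistral-ocr-latest',
    document: {
      type: 'document_url',            // for screenshots use type: 'image_url' with image_url
      document_url: 'https://example.com/bank-statement.pdf',
    },
  }),
});

const result = await response.json();
// Each page comes back with markdown content, e.g. result.pages[0].markdown
console.log(result.pages?.length, 'pages parsed');
```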
by Agent Studio
Overview
This workflow allows you to trigger custom logic in n8n directly from Retell's Voice Agent using Custom Functions. It captures a POST webhook from Retell every time a Voice Agent reaches a Custom Function node. You can plug in any logic - call an external API, book a meeting, update a CRM, or even return a dynamic response back to the agent.
Who is it for
For builders using Retell who want to extend Voice Agent functionality with real-time custom workflows or AI-generated responses.
Prerequisites
Have a Retell AI account
A Retell agent with a Custom Function node in its conversation flow (see template below)
Set your n8n webhook URL in the Custom Function configuration (see "How to use it" below)
(Optional) Familiarity with Retell's Custom Function docs
Start a conversation with the agent (text or voice)
Retell Agent Example
To get you started, we've prepared a Retell agent ready to be imported that includes the call to this template. Import the agent to your Retell workspace (top-right button on your agent's page). You will need to modify the function URL in order to call your own instance. This template is a simple hotel agent that calls the custom function to confirm a booking, passing basic formatted data.
How it works
Retell sends a webhook to n8n whenever a Custom Function is triggered during a call (or test chat). The webhook includes:
Full call context (transcript, call ID, etc.)
Parameters defined in the Retell function node
You can process this data and return a response string back to the Voice Agent in real time (see the sketch after this description).
How to use it
Copy the webhook URL (e.g. https://your-instance.app.n8n.cloud/webhook/hotel-retell-template)
Modify the Retell Custom Function webhook URL (see template description for screenshots): edit the function and update its URL
Modify the logic in the Set node or replace it with your own custom flow
Deploy and test: Retell will hit your n8n workflow during the conversation
Extension Ideas
Call a third-party API to fetch data (e.g. hotel availability, CRM records)
Use an LLM node to generate dynamic responses
Trigger a parallel automation (Slack message, calendar invite, etc.)
👉 Reach out to us if you're interested in analyzing your Retell Agent conversations.
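A hedged sketch of what the node after the Webhook might do with the Retell payload, written for an n8n Code node. The field names (`body.args`, `guest_name`, `check_in_date`) are illustrative assumptions - inspect the actual webhook body your Custom Function sends and adapt accordingly.

```javascript
// Sketch: read the parameters Retell passed to the Custom Function and build the
// string the Voice Agent should speak back to the caller.
const body = $input.first().json.body ?? {};
const args = body.args ?? {};   // parameters defined in the Retell function node (assumed key)

const reply = args.guest_name
  ? `Thanks ${args.guest_name}, your booking for ${args.check_in_date} is confirmed.`
  : 'Your booking is confirmed.';

// Hand the response string to the Respond to Webhook node (wrap it in whatever shape it expects).
return [{ json: { response: reply } }];
```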
by Thomas Janssen
Build a 100% local RAG with n8n, Ollama and Qdrant. This agent uses a semantic database (Qdrant) to answer questions about PDF files.
Tutorial
Click here to view the YouTube tutorial.
How it works
Build a chatbot that answers based on documents you provide it (Retrieval Augmented Generation). You can upload as many PDF files as you want to the Qdrant database. The chatbot will use its retrieval tool to fetch the relevant chunks and use them to answer questions (a sketch of the retrieval step appears at the end of this description).
Installation
Install n8n + Ollama + Qdrant using the Self-hosted AI Starter Kit.
Make sure to install Llama 3.2 and mxbai-embed-large as the embeddings model.
How to use it
First, run the "Data Ingestion" part and upload as many PDF files as you want.
Run the chatbot and start asking questions about the documents you uploaded.
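For intuition, here is a rough, self-contained sketch of what the retrieval tool does under the hood: embed the question with Ollama, then run a similarity search against Qdrant. The hosts reflect the starter kit defaults and the collection name "documents" is an assumption - use whatever your ingestion step created.

```javascript
// Sketch: question -> embedding (Ollama, mxbai-embed-large) -> nearest chunks (Qdrant).
const question = 'What does the contract say about termination?';

// 1. Embed the question locally with Ollama.
const embRes = await fetch('http://localhost:11434/api/embeddings', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'mxbai-embed-large', prompt: question }),
});
const { embedding } = await embRes.json();

// 2. Search the Qdrant collection for the closest document chunks.
const searchRes = await fetch('http://localhost:6333/collections/documents/points/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ vector: embedding, limit: 4, with_payload: true }),
});
const { result } = await searchRes.json();

// The payloads hold the stored chunk text the LLM answers from.
console.log(result.map((hit) => hit.payload));
```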
by Niklas Hatje
Use Case
When building a product, it's important to discover and eliminate bugs as quickly as possible. Since we're using our product at n8n a lot, we wanted to make it as easy as possible for everyone to report bugs with the needed level of detail. That's why we built this workflow, which allows everyone to add bugs to our Linear account directly from Slack.
What this workflow does
This workflow waits for a webhook call from Slack, which gets fired when users use the /bug command on a bot that you will create as part of this template. It then adds the bug to Linear using a pre-defined description and a defined label (see the sketch after this description), and notifies the user about the newly added bug as you can see below:
How to create your Slack bot
Visit https://api.slack.com/apps, click on New App and choose a name and workspace.
Click on OAuth & Permissions and scroll down to Scopes -> Bot Token Scopes.
Add the chat:write scope.
Head over to Slash Commands and click on Create New Command.
Use /bug as the command.
Copy the test URL from the Webhook node into Request URL.
Add whatever feels best to the description and usage hint.
Go to Install App and click install.
Setup
Configure your Slack bot using the sticky to the left.
Fill in the Set me up node. You can find the IDs easily using the Helper nodes section.
Make sure to exchange the Request URL in your Slack app with the Prod URL of the Webhook node before activating this workflow.
How to adjust it to your needs
Add zero, one, two or many labels when creating the new ticket.
Change the Slack message according to your needs.
Change the default description for a new bug ticket.
Rename the Slack command as it works best for you.
How to enhance this workflow
At n8n we use this workflow in combination with some others. For example, we have the following things on top:
We're using AI to classify the bugs and move them to the right team as soon as they get added to Linear (see this template).
We also added other commands like /pain and /idea that allow us to submit other things to Notion. You can see the workflow for that here.
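For reference, a hedged sketch of the Linear call the workflow effectively makes once the /bug webhook fires (the template itself uses the Linear node; this shows the equivalent GraphQL request). Slack slash commands arrive form-encoded, so the bug text is typically in `body.text`; the team ID, label ID and API key are placeholders you would normally fill via the "Set me up" node and credentials.

```javascript
// Sketch for an n8n Code node: turn the Slack /bug payload into a Linear issue.
const slack = $input.first().json.body ?? {};
const title = slack.text || 'Untitled bug';
const reporter = slack.user_name || 'unknown';

const query = `
  mutation CreateBug($input: IssueCreateInput!) {
    issueCreate(input: $input) { success issue { identifier url } }
  }`;

const res = await fetch('https://api.linear.app/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', Authorization: 'YOUR_LINEAR_API_KEY' },
  body: JSON.stringify({
    query,
    variables: {
      input: {
        teamId: 'YOUR_TEAM_ID',            // placeholder
        labelIds: ['YOUR_BUG_LABEL_ID'],   // placeholder
        title,
        description: `Reported by @${reporter} via Slack /bug`,
      },
    },
  }),
});

return [{ json: await res.json() }];
```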
by Jimleuk
This n8n workflow demonstrates how to automate the indexing of images to build an object-based image search. By utilising a DETR-ResNet-50 object classification model, we can identify objects within an image and store these associations in Elasticsearch along with a reference to the image.
How it works
An image is imported into the workflow via the HTTP Request node. The image is then sent to Cloudflare's Workers AI API, where the service runs the image through the DETR-ResNet-50 object classification model. The API returns the object associations with their positions in the image, labels and confidence scores of the classification. Confidence scores of less than 0.9 are discarded for brevity. The image's URL and its associations are then indexed in an Elasticsearch server, ready for searching. A sketch of the filter-and-index step appears at the end of this description.
Requirements
A Cloudflare account with Workers AI enabled to access the object classification model.
An Elasticsearch instance to store the image URL and related associations.
Extending this workflow
Further enrich your indexed data with additional attributes or metrics relevant to your users.
Use a vector store to provide similarity search over the images.
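A rough, self-contained sketch of the filter-and-index step (the workflow itself does this with HTTP Request nodes). The detection shape (`label`, `score`, `box`) follows typical detr-resnet-50 output, and the Elasticsearch host and "images" index name are assumptions.

```javascript
// Sketch: keep only confident detections and index the image URL with its associations.
const imageUrl = 'https://example.com/street.jpg';

// Example of what might come back from the object classification endpoint.
const detections = [
  { label: 'car', score: 0.97, box: { xmin: 10, ymin: 20, xmax: 200, ymax: 180 } },
  { label: 'dog', score: 0.42, box: { xmin: 220, ymin: 40, xmax: 300, ymax: 160 } },
];

// Discard anything below the 0.9 confidence threshold, keep searchable labels.
const objects = detections
  .filter((d) => d.score >= 0.9)
  .map((d) => ({ label: d.label, score: d.score }));

// Index the image URL together with its object associations.
const res = await fetch('http://localhost:9200/images/_doc', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ url: imageUrl, objects, indexed_at: new Date().toISOString() }),
});
console.log(await res.json());
```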
by Jean-Marie Rizkallah
🧩 Jamf Patch Summary to Slack
Stay on top of software patch compliance by automatically posting Jamf patch summaries to Slack. This helps IT and security teams quickly identify outdated installs and take action - without logging into Jamf.
✅ Prerequisites
• A Jamf Pro API key with permissions to read software titles and patch summaries
• A Slack app or incoming webhook URL with permission to post messages to your desired channel
🔍 How it works
• Manually trigger the flow or add a webhook
• Fetch the list of software titles from Jamf Pro
• Filter to select the software you're tracking (e.g. Chrome, Edge)
• Retrieve the patch summary for that software (latest version, up-to-date and out-of-date counts)
• Format the summary into Slack Block Kit (see the sketch after this description)
• Post the formatted summary into a Slack channel
⚙️ Set up steps
• Takes ~5–10 minutes to configure
• Set your server BaseURL variable in the Set node
• Add your Jamf Pro API credentials in the HTTP Request nodes (Get & Retrieve)
• Set the target software ID in the Filter node
• Add your Slack webhook URL or token in the final HTTP node
• Optional: Adjust Slack formatting inside the Function node
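A sketch of the Function node that turns a Jamf patch summary into Slack Block Kit. The input field names (`title`, `latestVersion`, `upToDate`, `outOfDate`) mirror a typical Jamf Pro patch summary response but may differ by version - map them to whatever the Retrieve request actually returns.

```javascript
// Sketch: build Slack Block Kit blocks from one patch summary item.
const summary = $input.first().json;

const blocks = [
  { type: 'header', text: { type: 'plain_text', text: `Patch summary: ${summary.title}` } },
  {
    type: 'section',
    fields: [
      { type: 'mrkdwn', text: `*Latest version:*\n${summary.latestVersion}` },
      { type: 'mrkdwn', text: `*Up to date:*\n${summary.upToDate}` },
      { type: 'mrkdwn', text: `*Out of date:*\n${summary.outOfDate}` },
    ],
  },
];

// The final HTTP node posts { blocks } to the Slack incoming webhook URL.
return [{ json: { blocks } }];
```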
by Arthur Braghetto
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.
Clean Web Content Extraction with Anti-Bot Fallback
Extract clean and structured text from any webpage with optional fallback to an anti-bot scraping service. Ideal for AI tools and content workflows.
🧠 How it Works
This sub-workflow enables reliable and clean scraping of any public webpage by simply passing a url parameter. It is designed to be embedded into other workflows or used as a tool for AI agents. It supports two output modes (see the sketch after this description):
fulltext: true - returns { title, text } with full page content
fulltext: false - returns { title, url, content } with a short excerpt
💡 If the site is protected by anti-bot systems (like Cloudflare), it will automatically fall back to Scrape.do, a scraping API with a generous free plan.
🧩 This template requires the n8n-nodes-webpage-content-extractor community node, so it only works in self-hosted n8n environments.
🚀 Use Cases
As a reusable sub-workflow, via the Execute Sub-workflow node.
As a tool for an AI Agent, compatible with the Call n8n Workflow Tool.
Perfect for chatbots, summarization workflows, or RSS/feed enrichment.
Empowers your AI Agent with the ability to browse and extract readable content from websites automatically.
🔖 Parameters
url (string): the webpage URL to scrape
fulltext (boolean): set true for full page content, false for summarized output
⚙️ Setup
Install the community node n8n-nodes-webpage-content-extractor in your self-hosted n8n instance.
Create a free account at Scrape.do and obtain your API token.
In the workflow, locate the Scrape.do HTTP Request node and configure the credentials using your API token.
Detailed step-by-step instructions are available in the workflow notes. The Scrape.do API is only used as a fallback when conventional scraping fails, helping you preserve your API credits.
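A minimal sketch of the output-shaping step for the two modes, written for an n8n Code node. It assumes the sub-workflow inputs arrive as `url`/`fulltext` on the incoming item and that the content extractor node (named "Webpage Content Extractor" here purely for illustration) returned `{ title, text }`.

```javascript
// Sketch: shape the sub-workflow output depending on the fulltext flag.
const { url, fulltext } = $input.first().json;
const extracted = $('Webpage Content Extractor').first().json; // assumed { title, text }

const output = fulltext
  ? { title: extracted.title, text: extracted.text }                          // full page content
  : { title: extracted.title, url, content: (extracted.text ?? '').slice(0, 500) }; // short excerpt

return [{ json: output }];
```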
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.
Prerequisites
An n8n account (free trial available)
A Scrapeless account and API key
A Google account to access Google Sheets
🛠️ Step-by-Step Setup
1. Create a New Workflow in n8n
Start by creating a new workflow in n8n. Add a Manual Trigger node to begin.
2. Add the Scrapeless Node
Add the Scrapeless node and choose the Scrape operation. Paste in your API key, set your target website URL, then execute the node to fetch data and verify the results.
3. Clean Up the Data
Add a Code node to clean and format the scraped data. Focus on extracting key fields like Title, Description and URL (see the sketch after this description).
4. Set Up Google Sheets
Create a new spreadsheet in Google Sheets, name the sheet (e.g. Business Leads) and add columns like Title, Description and URL.
5. Connect Google Sheets in n8n
Add the Google Sheets node, choose the Append or update row operation, select the spreadsheet and worksheet, and manually map each column to the cleaned data fields.
6. Run and Test the Workflow
Click "Execute Workflow" in n8n and check your Google Sheet to confirm the data is properly inserted.
Results
With this automated workflow, you can continuously extract business lead data, clean it, and push it directly into a spreadsheet - perfect for outbound sales, lead lists, or internal analytics.
How to Use
⚙️ Open the Variables node and plug in your Scrapeless credentials.
📄 Confirm the Google Sheets node points to your desired spreadsheet.
▶️ Run the workflow manually from the Start node.
Perfect For:
Sales teams doing outbound prospecting
Marketers building lead lists
Agencies running data aggregation tasks
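A possible shape for the clean-up Code node from step 3. The incoming field names (`title`, `description`, `url`/`link`) are assumptions about the Scrapeless output - rename them to match what the node actually returns so the columns line up with the Google Sheet.

```javascript
// Sketch: normalise each scraped record into the Title / Description / URL columns.
return $input.all().map((item) => {
  const raw = item.json;
  return {
    json: {
      Title: (raw.title ?? '').trim(),
      Description: (raw.description ?? '').replace(/\s+/g, ' ').trim(), // collapse whitespace
      URL: raw.url ?? raw.link ?? '',
    },
  };
});
```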
by Mark Shcherbakov
Video Guide
I prepared a detailed video guide that walks through the whole process of building this workflow.
Who is this for?
This workflow is ideal for developers, data analysts, and business owners who want to enable conversational interactions with their database. It's particularly useful for cases where users need to extract, analyze, or aggregate data without writing SQL queries manually.
What problem does this workflow solve?
Accessing and analyzing database data often requires SQL expertise or dedicated reports, which can be time-consuming. This workflow empowers users to interact with a database conversationally through an AI-powered agent. It dynamically generates SQL queries based on user requests, streamlining data retrieval and analysis.
What this workflow does
This workflow integrates OpenAI with a Supabase database, enabling users to interact with their data via an AI agent. The agent can:
Retrieve records from the database.
Extract and analyze JSON data stored in tables.
Provide summaries, aggregations, or specific data points based on user queries.
Dynamic SQL querying: the agent uses user prompts to create and execute SQL queries on the database.
Understand JSON structure: the workflow identifies the JSON schema from sample records, enabling the agent to parse and analyze JSON fields effectively.
Database schema exploration: it provides the agent with tools to retrieve table structures, column details, and relationships for precise query generation (see the sketch after this description).
Setup
Preparation
Create accounts:
N8N: for workflow automation.
Supabase: for database hosting and management.
OpenAI: for building the conversational AI agent.
Configure the database connection:
Set up a PostgreSQL database in Supabase.
Use the appropriate credentials (username, password, host, and database name) in your workflow.
N8N Workflow
AI agent with tools:
Code Tool: execute SQL queries based on user input.
Database Schema Tool: retrieve a list of all tables in the database, using a predefined SQL query to fetch table definitions, including column names, types, and references.
Table Definition: retrieve a list of columns with types for one table.
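As an illustration of the "table definition" tool, here is one way such a predefined query could be built from `information_schema`; the `public` schema and the `invoices` table name are assumptions, and the table name would normally be supplied by the agent at run time.

```javascript
// Sketch: a parameterised table-definition query the schema tool might run.
const tableName = 'invoices'; // supplied by the agent at run time (example value)

const tableDefinitionQuery = `
  SELECT c.column_name,
         c.data_type,
         c.is_nullable,
         tc.constraint_type
  FROM information_schema.columns c
  LEFT JOIN information_schema.key_column_usage kcu
    ON kcu.table_name = c.table_name AND kcu.column_name = c.column_name
  LEFT JOIN information_schema.table_constraints tc
    ON tc.constraint_name = kcu.constraint_name
  WHERE c.table_schema = 'public' AND c.table_name = '${tableName}'
  ORDER BY c.ordinal_position;`;

// The Postgres/Code tool would execute this string and return the rows to the agent.
console.log(tableDefinitionQuery);
```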
by Jimleuk
This n8n template leverages n8n's multi-form feature to build a two-part job application submission journey which aims to eliminate the need for applicants to re-enter data found on their CVs/resumes.
How it works
The application submission process starts with an n8n Form Trigger that accepts CV files in the form of PDFs.
The PDF is validated using the Text Classifier node to determine whether it is a valid CV; otherwise the applicant is asked to re-upload.
A Basic LLM node is used to extract relevant information from the CV for data capture. A copy of the original job post is included to ensure relevancy.
The applicant's data is then sent to an ATS for processing. For our demo, we used Airtable because we could attach PDFs to rows.
Finally, a second Form Trigger is used for the actual application form. However, it is prefilled to save the applicant time and allow them to amend any of the generated application fields (see the sketch after this description).
How to use
Ensure to change the redirect URL in the Form Ending node to use the host domain of your n8n instance.
Requirements
OpenAI for LLM
Airtable to capture applicant data
Customising the workflow
The application form is pretty basic for this demonstration but could be extended to ask more in-depth questions. If it fits the job, why not ask applicants to upload portfolio works and have AI describe/caption them?
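One possible way to assemble the redirect to the prefilled second form, sketched for an n8n Code node. This assumes the extracted CV fields are available on the incoming item and that your n8n version supports pre-filling form fields via query parameters named after the form fields; the host, form path and field names below are all placeholders.

```javascript
// Sketch: build a redirect URL whose query parameters pre-fill the second form.
const cv = $input.first().json; // output of the LLM extraction step (assumed field names)

const base = 'https://your-n8n-host.example.com/form/job-application';
const params = new URLSearchParams({
  'Full Name': cv.full_name ?? '',
  'Email': cv.email ?? '',
  'Most Recent Role': cv.last_role ?? '',
  'Years of Experience': String(cv.years_experience ?? ''),
});

return [{ json: { redirectUrl: `${base}?${params.toString()}` } }];
```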
by Adam Janes
This workflow demonstrates a simple way to run evals on a set of test cases stored in a Google Sheet. The example we are using comes from an information extraction task dataset, where we tested 6 different LLMs on 18 different test cases. This workflow extends the functionality of my simple eval for benchmarking legal tasks here. Rather than running executions sequentially (waiting for each one to respond before making another request), we use parallel processing to fire 2 requests every second (see the sketch after this description). You can see our sample data in this spreadsheet here to get started. Once you have this working for our dataset, you can plug in your own test cases matching different LLMs to see how it works with your own data.
How it works
Pull the test cases from Google Sheets.
For each case, fire off an HTTP request to a webhook.
That webhook grabs the relevant source file from Google Drive and converts it to text.
The text gets sent to an LLM via OpenRouter (so we can easily swap out models).
Results come back and are logged in Google Sheets.
Set up steps:
Add your credentials for Google Sheets, Google Drive, and OpenRouter.
Make a copy of the original data spreadsheet so that you can edit it yourself. You will need to plug your version into the Update Results node to see the spreadsheet update on each run of the loop.
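A conceptual sketch of the rate-limited fan-out: dispatch the eval webhook for two test cases per second without waiting for each response before sending the next. In the workflow itself this is handled with batching plus a Wait node; the webhook URL and test-case shape are placeholders.

```javascript
// Sketch: fire 2 requests per second in parallel, then collect all responses at the end.
const testCases = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }]; // rows from Google Sheets
const webhookUrl = 'https://your-n8n-host.example.com/webhook/run-eval'; // placeholder

const pending = [];
for (let i = 0; i < testCases.length; i += 2) {
  const batch = testCases.slice(i, i + 2);
  // Fire both requests immediately (no await), then pause one second before the next pair.
  for (const tc of batch) {
    pending.push(fetch(webhookUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(tc),
    }));
  }
  await new Promise((resolve) => setTimeout(resolve, 1000));
}

const responses = await Promise.all(pending);
console.log(`${responses.length} eval requests dispatched`);
```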