by Lucas Peyrin
**How it works**

This workflow converts an HTML string into a polished PDF file using the powerful open-source Gotenberg service. It's designed to be a reusable utility in your automation stack.

- Receives Input: The workflow is triggered with a JSON object containing the full html code as a string and a desired file_name for the output.
- Prepares File: It converts the incoming HTML string into a binary index.html file, which is required for the API call.
- Calls Gotenberg API: It sends the HTML file to a running Gotenberg instance via an HTTP request. It also dynamically sets the output filename and embeds metadata (like Author, Title, and Creation Date) directly into the PDF.
- Returns PDF: The workflow outputs the final binary PDF file, ready to be saved, sent in an email, or used in the next step of your main workflow.

**Set up steps**

Setup time: ~3 minutes. This workflow has one critical prerequisite: a running Gotenberg instance that your n8n can connect to.

1. Prerequisite: Run Gotenberg. You need to have the Gotenberg service running; the easiest way is with Docker. Add the following service to your docker-compose.yml file (the same one you use for n8n):

```yaml
services:
  # ... your n8n service ...
  gotenberg:
    image: gotenberg/gotenberg:8
    restart: always
```

Then restart your stack with docker compose up -d. This makes Gotenberg available at the address http://gotenberg:3000 from within your n8n container.

2. Use as a Sub-Workflow. This workflow is ready to be used as a sub-workflow. In your main workflow, add an Execute Sub-Workflow node, select this "Create PDF from HTML" workflow in the Workflow parameter, and provide the input data in the required format: a JSON object with html and file_name keys.
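For reference, the sub-workflow input could look like the minimal sketch below; the html and file_name keys are the ones named above, and the values are illustrative placeholders:

```json
{
  "html": "<html><body><h1>Invoice #1234</h1><p>Thanks for your purchase.</p></body></html>",
  "file_name": "invoice-1234.pdf"
}
```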
by VKAPS IT
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🎯 **How it works**

This workflow captures new lead information from a web form, enriches it with Apollo.io data, qualifies the lead using AI, and, if the lead is strong, automatically sends a personalized outreach email via Gmail and logs the result in Google Sheets.

🛠️ **Key Features**

- 📩 Lead form capture with validation
- 🔍 Enrichment via Apollo API
- 🤖 Lead scoring using AI (LangChain + Groq)
- 📧 Dynamic email generation & sending via Gmail
- 📊 Logging leads with job title & org into Google Sheets
- ✅ Conditional email sending (score ≥ 6 only; an illustrative scored-lead record is sketched at the end of this description)

🧪 **Set up steps**

Estimated time: 15–20 minutes

- Add your Apollo API key to the HTTP Header credential (never hardcode it!)
- Connect your Gmail account for sending emails
- Connect your Google Sheets account and set up the correct spreadsheet & sheet name
- Enable LangChain/Groq credentials for lead scoring and AI-generated emails
- Update the form endpoint to your live webhook if needed

📌 **Sticky Notes**

Add the following mandatory sticky notes inside your workflow:

- FormTrigger Node: "Collects lead info via form. Ensure your form is connected to this endpoint."
- HTTP Request Node: "Enrich lead using Apollo.io API. Add your API key via header-based authentication."
- AI Agent (Lead Score): "Scores lead from 1-10 based on job title and industry match. Only leads with score ≥ 6 proceed."
- AI Agent (Email Composer): "Generates a concise, polite email using lead's job title & company. Modify tone if needed."
- Google Sheets Append: "Logs enriched lead with job title, org, and LinkedIn URL. Customize sheet structure if needed."
- Gmail Node: "Sends personalized outreach email if lead passes score threshold. Uses AI-generated content."

💸 **Free or Paid?**

Free – no paid API services are required (Apollo has a free tier).
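As an illustration of the conditional step, the lead-scoring agent's output might be shaped roughly like the hypothetical record below; an IF check then only lets leads with score ≥ 6 continue to the email composer and Gmail nodes. Field names and values are assumptions, not the template's actual schema:

```json
{
  "name": "Jane Doe",
  "job_title": "Head of Marketing",
  "organization": "Acme Corp",
  "linkedin_url": "https://www.linkedin.com/in/janedoe",
  "score": 8,
  "reason": "Senior marketing role at a company in the target industry"
}
```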
by Jimleuk
This template is for self-hosted n8n instances only.

This n8n template demonstrates how to build a simple SQLite MCP server to perform local database operations as well as use it for business intelligence. This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite

**How it works**

- An MCP Server trigger is used and connected to 5 tools: 2 Code node tools and 3 Custom Workflow tools.
- The 2 Code node tools use the sqlite3 library and run simple read-only queries, so the Code node tool can be used as-is.
- The 3 Custom Workflow tools are used for select, insert and update queries, as these are operations which require a bit more discretion. Whilst it may be easier to let the agent run raw SQL queries, it is a little safer to only accept the parameters instead. The Custom Workflow tool lets us define this restricted schema for the tool input, which we then use to construct the SQL statement ourselves.
- All 3 Custom Workflow tools trigger the same Execute Workflow trigger in this very template, which has a switch to route the operation to the correct handler.
- Finally, we use Code nodes to handle the select, insert and update operations. The responses are then sent back to the MCP client.

**How to use**

This SQLite MCP server allows any compatible MCP client to manage a SQLite database by supporting select, create and update operations. You will need to have a SQLite database available before you can use this server.

- Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
- Try the following queries in your MCP client:
  - "Please create a table to store business insights and add the following..."
  - "What business insights do we have on current retail trends?"
  - "Who has contributed the most business insights in the past week?"

**Requirements**

- SQLite for the database.
- MCP client or agent for usage, such as Claude Desktop: https://claude.ai/download

**Customising this workflow**

If the scope of schemas or tables is too open, try restricting it so the MCP server serves a specific purpose for business operations, e.g. confine the querying and editing to HR-only tables before providing access to people in that department.

Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
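As a rough sketch of what one of the read-only Code node tools could look like (the database path and query are assumptions, and your self-hosted instance must allow the external sqlite3 module, e.g. via NODE_FUNCTION_ALLOW_EXTERNAL):

```javascript
// Hypothetical read-only tool: list the tables in the SQLite database.
const sqlite3 = require('sqlite3'); // assumes NODE_FUNCTION_ALLOW_EXTERNAL=sqlite3

const db = new sqlite3.Database('/data/business.db'); // assumed path to your database file

// sqlite3 is callback-based, so wrap the query in a Promise for the Code node.
const rows = await new Promise((resolve, reject) => {
  db.all(
    "SELECT name FROM sqlite_master WHERE type = 'table'",
    (err, result) => (err ? reject(err) : resolve(result))
  );
});

db.close();

// Return one n8n item per table name.
return rows.map((row) => ({ json: row }));
```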
by Jimleuk
This n8n template demonstrates how to build your own Github MCP server and personalise it to your organisation's repositories, issues and pull requests. This n8n implementation, though not as fully featured as the official MCP server offered by Github, allows you to control precisely what access and/or functionality is granted to users, which can make MCP use simpler and, in some cases, more secure. The use case in this template is simply to view and comment on issues within a specific repository, but it can be extended to meet the needs of your team.

This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/github

**How it works**

- An MCP Server trigger is used and connected to 3 Custom Workflow tools. We're using Custom Workflow tools because quite a few nodes are required for each task.
- Behind these tools are regular Github nodes, albeit preconfigured with credentials and the target repository.
- The "Get Issue Comments" and "Create Issue Comment" tools depend on obtaining an issue number first; the agent should call the "Get Latest Issues" tool for this.

**How to use**

This Github MCP server allows any compatible MCP client to view and comment on Github issues. You will need to have a Github account and repository access available before you can use this server.

- Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
- Try the following queries in your MCP client:
  - "Can you get me the latest issues about MCP?"
  - "What is the current progress on Issue 12345?"
  - "Please can you add a comment to Issue 12345 that they should try installing the latest version and see if that works?"

**Requirements**

- Github for account and repository access. The repository need not be your own, but you'll still need to ensure you have the correct permissions.
- MCP client or agent for usage, such as Claude Desktop: https://claude.ai/download

**Customising this workflow**

Extend this template to interact with pull requests or workflows within your own company's Github repositories. Alternatively, pull in metrics and generate reports for programme managers.

Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
by Josh Universe
**How the sequence works**

- A Schedule Trigger node activates the automation on a defined schedule.
- An Airtable node searches for previously posted questions in your question database (an Airtable base template is linked in the setup steps below).
- An Aggregate node takes all the questions from Airtable and compresses them into a single output.
- ChatGPT, or a model of your choice, generates a discussion question based on the options in the system prompt.
- The discussion question is posted to the subreddit of your choice by the Reddit node. You can choose between a text, image, or link post!
- The recently posted discussion question is then uploaded to your Airtable base using the Airtable node. This is used to prevent ChatGPT from creating duplicate questions (an illustrative record is sketched below).

**Setup steps**

The setup process takes about 5 minutes. An Airtable base template is pre-made for you here: https://airtable.com/app6wzQqegKIJOiOg/shrzy7L9yv8BFRQdY

- Set the recurrence in the "Schedule" node.
- Create an Airtable account and get an Airtable personal access token.
- Configure the "Get Previous Discussions" Airtable node.
- Configure the options in brackets in the "Generate New Discussion" node.
- Set the desired subreddit to post to and the post type (text, image, or link) in the "Post Discussion" node.
- Configure the "Create Archived Discussion" node to map to the Airtable base (required) and specific subreddit (optional).
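For illustration only, an archived-discussion record might carry fields like the ones below; the actual field names come from the linked Airtable base template, so treat these as assumptions. The stored questions are what get aggregated and fed back into the system prompt so the model avoids repeating itself:

```json
{
  "Question": "What's one small habit that improved your results the most this year?",
  "Subreddit": "r/yoursubreddit",
  "Post Type": "text",
  "Date Posted": "2024-06-01"
}
```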
by Jimleuk
This n8n workflow demonstrates how to create a really simple yet effective customer support channel and pipeline by combining Slack, Linear and AI tools. Built on n8n's ability to integrate anything, this workflow is intended for small support teams who want to maximise re-use of the tools they already have, with an interface that doesn't require any onboarding.

Read the blog post here: https://blog.n8n.io/automated-customer-support-tickets-with-n8n-slack-linear-and-ai/

**How it works**

- The workflow is connected to a Slack channel set up with the customer to capture support issues.
- Only messages which are tagged with a "✅" reaction are captured by the workflow. Messages are tagged by the support team in the channel.
- Each captured support issue is sent to the AI model to classify, prioritise and rewrite into a support ticket (an illustrative example follows this description).
- The generated support ticket is uploaded to Linear for the support team to investigate and track.
- The support team is able to report back to the user via the channel when the issue is fixed.

**Requirements**

- Slack channel to be monitored
- Linear account and project

**Customising this workflow**

Don't have Linear? This workflow can work just as well with traditional ticketing systems like JIRA.
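As a purely illustrative example of the classification step, the AI's output for a captured Slack message could be shaped something like this before it is turned into a Linear issue; the field names are assumptions, not the workflow's actual schema:

```json
{
  "title": "Login page returns 500 error after password reset",
  "priority": "High",
  "category": "Bug",
  "description": "Customer reports that resetting their password logs them out and subsequent logins fail with a 500 error. Reproduced twice on Chrome."
}
```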
by Guillaume Duvernay
This template provides a fully automated system for monitoring news on any topic you choose. It leverages Linkup's AI-powered web search to find recent, relevant articles, extracts key information like the title, date, and summary, and then neatly organizes everything in an Airtable base. Stop manually searching for updates and let this workflow deliver a curated news digest directly to your own database, complete with a Slack notification to let you know when it's done. This is the perfect solution for staying informed without the repetitive work.

**Who is this for?**

- **Marketing & PR professionals:** Keep a close eye on industry trends, competitor mentions, and brand sentiment.
- **Analysts & researchers:** Effortlessly gather source material and data points on specific research topics.
- **Business owners & entrepreneurs:** Stay updated on market shifts, new technologies, and potential opportunities without dedicating hours to reading.
- **Anyone with a passion project:** Easily follow developments in your favorite hobby, field of study, or area of interest.

**What problem does this solve?**

- **Eliminates manual searching:** Frees you from the daily or weekly grind of searching multiple news sites for relevant articles.
- **Centralizes information:** Consolidates all relevant news into a single, organized, and easily accessible Airtable database.
- **Provides structured data:** Instead of just a list of links, it extracts and formats key information (title, summary, URL, date) for each article, ready for review or analysis.
- **Keeps you proactively informed:** The automated Slack notification ensures you know exactly when new information is ready, closing the loop on your monitoring process.

**How it works**

1. Schedule: The workflow runs automatically based on a schedule you set (the default is weekly).
2. Define topics: In the Set news parameters node, you specify the topics you want to monitor and the time frame (e.g., news from the last 7 days).
3. AI web search: The Query Linkup for news node sends your topics to Linkup's API. Linkup's AI searches the web for relevant news articles and returns a structured list containing each article's title, URL, summary, and publication date (an example record is sketched at the end of this description).
4. Store in Airtable: The workflow loops through each article found and creates a new record for it in your Airtable base.
5. Notify on Slack: Once all the news has been stored, a final notification is sent to a Slack channel of your choice, letting you know the process is complete and how many articles were found.

**Setup**

1. Configure the trigger: Adjust the Schedule Trigger node to set the frequency and time you want the workflow to run.
2. Set your topics: In the Set news parameters node, replace the example topics with your own keywords and define the news freshness you'd like.
3. Connect your accounts:
   - Linkup: Add your Linkup API key in the Query Linkup for news node. Linkup's free plan includes €5 of credits monthly, enough for about 1,000 runs of this workflow.
   - Airtable: In the Store one news node, select your Airtable account, then choose the Base and Table where you want to save the news.
   - Slack: In the Notify in Slack node, select your Slack account and the channel where you want to receive notifications.
4. Activate the workflow: Toggle the workflow to "Active", and your automated news monitoring system is live!

**Taking it further**

- **Change your database:** Don't use Airtable? Easily swap the Airtable node for a Notion, Google Sheets, or any other database node to store your news.
- **Customize notifications:** Replace the Slack node with a Discord, Telegram, or Email node to get alerts on your preferred platform.
- **Add AI analysis:** Insert an AI node after the Linkup search to perform sentiment analysis on the news summaries, categorize articles, or generate a high-level overview before saving them.
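To make the data flow concrete, each article returned by Linkup and stored as an Airtable record roughly follows the shape sketched below; the field values are illustrative placeholders, and the exact field names depend on how you map them in the Store one news node:

```json
{
  "title": "Example: AI search startups raise record funding",
  "url": "https://example.com/news/ai-search-funding",
  "summary": "A short summary of the article's key points.",
  "date": "2024-05-13"
}
```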
by Yang
**Who is this for?**

This workflow is for content creators, social media managers, marketing teams, and virtual assistants who want to automatically repurpose YouTube videos into ready-to-post social media content. If you need to quickly turn long-form videos into short posts for platforms like Instagram, Facebook, or LinkedIn, this workflow saves you hours of manual work.

**What problem is this workflow solving?**

Manually extracting ideas from YouTube videos, writing captions, creating images, and preparing social media posts takes a lot of time and effort. This workflow automates the entire process: it reads the video, generates posts with captions and AI images, and organizes everything into Airtable. It lets you focus more on growing your audience instead of spending hours repurposing content.

**What this workflow does**

- Watches a YouTube channel RSS feed for new videos.
- Extracts the video transcript automatically using Dumpling AI.
- Summarizes and transforms the transcript into 3 social media captions (Instagram, Facebook, LinkedIn) using OpenAI.
- Generates 3 unique AI image prompts.
- Sends the prompts to Dumpling AI to create realistic social media images.
- Saves the captions and attaches the AI images into Airtable, ready for posting.

**Setup**

1. RSS feed: Get your YouTube channel's RSS feed URL and insert it into the RSS Trigger node. This will monitor for new YouTube uploads automatically.
2. Dumpling AI for transcript extraction: Sign up at Dumpling AI and get your Dumpling AI API key. In the first HTTP Request node after the RSS trigger, insert your API key (use HTTP Header Authentication). This sends the YouTube URL to Dumpling AI's /extract-transcript endpoint.
3. OpenAI for caption and prompt generation: Get your OpenAI API key and connect your account in the OpenAI node. The AI will generate 3 platform-specific captions and 3 creative prompts to design images related to the video.
4. Edit Fields node: This node organizes the generated captions and prompts into separate fields for easy Airtable mapping. Captions are split for Instagram, Facebook, and LinkedIn.
5. Dumpling AI for image generation: After the Edit Fields node, the second HTTP Request node sends the image prompt to Dumpling AI's /generate-image endpoint, which returns a realistic AI-generated image.
6. Airtable for saving posts (without images first): Create a new base in Airtable with the following fields: Platform (single select: Instagram, Facebook, LinkedIn), Content (long text), Image (attachment). Connect your Airtable Personal Access Token to the Airtable node. The Airtable node saves the generated captions into separate records, initially without images.
7. Upload generated images back to Airtable: The third HTTP Request node PATCHes the Airtable record, updating the Image field with the generated AI image from Dumpling AI (see the sketch after this section).

**Credentials required**

- Dumpling AI API Key (for transcript extraction and AI image generation)
- OpenAI API Key (for caption and prompt creation)
- Airtable Personal Access Token (for inserting and updating records)

**How to customize this workflow to your needs**

- Change the OpenAI prompt to generate captions in your brand tone (e.g., friendly, professional, witty).
- Modify the image prompts to match your design style better.
- Adjust the Airtable base fields if you want to add more platforms or content formats.
- Add scheduling tools like Buffer or Metricool to automatically post from Airtable.

**⚡ Quick Tips**

- Make sure Dumpling AI credits are active to allow transcript and image generation.
- Set Airtable permissions properly so PATCH requests can update attachments.
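For reference, the PATCH request that attaches the generated image to an Airtable record typically sends a body like the sketch below; the field name and image URL are placeholders, and Airtable expects attachment fields to be written as an array of objects each containing a url:

```json
{
  "fields": {
    "Image": [
      { "url": "https://example.com/generated/post-image-1.png" }
    ]
  }
}
```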
by darrell_tw
**How it works**

- Fetch all workflows from your n8n instance.
- Filter workflows that contain nodes with a modelId setting.
- Extract the node names, model IDs, model names, workflow names, and workflow URLs.
- Save the extracted information into a connected Google Sheet.

**Set up steps**

- Connect your n8n API credentials.
- Connect your Google Sheets account.
- Replace "Your n8n domain" with your actual domain URL.
- Use this Google Sheet template to create a new sheet for results.
- Setup typically takes 5 minutes.
- Be cautious: if you have over 100 workflows, performance may be impacted.

**Notes**

- Sticky notes inside the workflow provide extra guidance.
- This workflow clears old sheet data before writing new results.
- Make sure your n8n instance allows API access.

**Result Example**

Update: Originally it didn't detect the AI model used inside tools. Now it's fixed!

Update 2025-04-29: Supports 1.91.0 with opening the node directly! The URL is optimized with the node ID.
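As a rough sketch of the filtering step, a Code node could scan each workflow returned by the n8n API along these lines; the domain, field names and the exact node-parameter key ("modelId" vs. "model") are assumptions and may differ between node types and n8n versions:

```javascript
// Hypothetical Code node: collect nodes that reference an AI model.
const domain = 'https://your-n8n-domain'; // replace with your actual n8n URL

const results = [];
for (const item of $input.all()) {
  const workflow = item.json;
  for (const node of workflow.nodes || []) {
    // Model selection is usually stored in the node parameters; the exact key
    // depends on the node type and n8n version.
    const modelId = node.parameters?.modelId ?? node.parameters?.model;
    if (!modelId) continue;

    results.push({
      json: {
        workflowName: workflow.name,
        nodeName: node.name,
        modelId,
        // Deep link that opens the workflow directly.
        workflowUrl: `${domain}/workflow/${workflow.id}`,
      },
    });
  }
}
return results;
```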
by Jimleuk
This n8n workflow demonstrates an approach to parsing bank statement PDFs with multimodal LLMs as an alternative to traditional OCR. This allows for much more accurate data extraction from the document, especially when it comes to tables and complex layouts.

Multimodal parsing is better than traditional OCR because:

- It reduces complexity and overhead by avoiding the need to preprocess the document into a text format such as markdown before passing it to the LLM.
- It handles non-standard PDF formats which may produce garbled output via traditional OCR text conversion.
- It's orders of magnitude cheaper than premium OCR models that still require post-processing cleanup and formatting. LLMs can format to any schema or language you desire!

**How it works**

You can use the example bank statement created specifically for this workflow here: https://drive.google.com/file/d/1wS9U7MQDthj57CvEcqG_Llkr-ek6RqGA/view?usp=sharing

- A PDF bank statement is imported via Google Drive. For this demo, I've created a mock bank statement which includes complex table layouts of 5 columns. Typically, OCR will be unable to align the columns correctly and will mistake some deposits for withdrawals.
- Because multimodal LLMs do not accept PDFs directly, we'll have to convert the PDF to a series of images. We can achieve this by using a tool such as Stirling PDF. Stirling PDF is self-hostable, which is handy for sensitive data such as bank statements.
- Stirling PDF will return our PDF as a series of JPGs (one for each page) in a zipped file. We can use n8n's decompress node to extract the images and ensure they are ordered by using the Sort node.
- Next, we'll resize each page using the Edit Image node to ensure the right balance between resolution limits and processing speed.
- Each resized page image is then passed into the Basic LLM node, which will use our multimodal LLM of choice - Gemini 1.5 Pro. In the LLM node's options, we'll add a "user message" of type binary (data), which is how we add our image data as an input.
- Our prompt will instruct the multimodal LLM to transcribe each page to markdown. Note, you do not need to do this - you can just ask for data points to extract directly! Our goal for this template is to demonstrate the LLM's ability to accurately read the page.
- Finally, with our markdown version of all pages, we can pass this to another LLM node to extract required data such as deposit line items (an illustrative extraction output is sketched below).

**Requirements**

- Google Gemini API for the multimodal LLM.
- Google Drive access for document storage.
- Stirling PDF instance for PDF-to-image conversion.

**Customising the workflow**

At time of writing, Gemini 1.5 Pro is the most accurate at text document parsing with a relatively low cost. If you are not using Google Gemini, however, you can switch to other multimodal LLMs such as OpenAI GPT or Anthropic Claude.

If you don't need the markdown, simply asking for what to extract directly in the LLM's prompt is also acceptable and would save a few extra steps.

Not parsing any bank statements any time soon? This template also works for invoices, inventory lists, contracts, legal documents, etc.
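As an illustration of that final extraction step, the structured output you ask the second LLM for could look something like the sketch below; the field names and values are assumptions for illustration, not part of the template:

```json
{
  "deposits": [
    { "date": "2024-03-02", "description": "Salary payment", "amount": 2500.00 },
    { "date": "2024-03-15", "description": "Refund - online store", "amount": 42.10 }
  ],
  "currency": "USD",
  "statement_period": "2024-03-01 to 2024-03-31"
}
```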
by MattF
This workflow helps SEO teams catch top movers in Google Search Console by comparing daily performance across keyword segments like brand, nonbrand, and content categories. Instead of serving as a routine check, it highlights the queries and pages with the biggest jumps or drops, making it ideal for spotting wins, losses, or unexpected shifts early.

**How It Works**

- Runs daily on a scheduled trigger (e.g. every morning).
- Pulls GSC data for the prior two days (e.g. yesterday vs. the day before).
- Segments traffic by keyword type or URL pattern (e.g. brand, nonbrand, recipes, blogs, etc.).
- Calculates changes in clicks, impressions, CTR, and average position.
- Flags top movers with the biggest positive or negative deltas (see the sketch after this section).
- Sends structured reports via Slack or email, grouped by segment and sorted by impact.

**Setup Steps**

- Connect your Google Search Console account and optionally Gmail or Slack.
- Swap in your own domain(s) and customize the segmentation logic (e.g. brand terms, path filters).
- By default, the workflow includes Slack alerts, but these can be easily switched to or combined with email, webhook, or other channels.
- Full setup takes around 15–20 minutes with working GSC credentials.

Note: The "recipes" segment is included as an example of how to segment content. This can be changed to match blog, FAQ, product pages, or any other category.
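A Code-node sketch of the delta/top-mover step might look roughly like this; field names such as clicks_today and clicks_prev are assumptions about how the two GSC pulls were merged, not the template's actual schema:

```javascript
// Hypothetical delta calculation over merged GSC rows for one segment.
const rows = $input.all().map((item) => item.json);

const withDeltas = rows.map((row) => {
  const clicksDelta = row.clicks_today - row.clicks_prev;
  const impressionsDelta = row.impressions_today - row.impressions_prev;
  const ctrToday = row.impressions_today ? row.clicks_today / row.impressions_today : 0;
  const ctrPrev = row.impressions_prev ? row.clicks_prev / row.impressions_prev : 0;
  return {
    ...row,
    clicksDelta,
    impressionsDelta,
    ctrDelta: ctrToday - ctrPrev,
    positionDelta: row.position_today - row.position_prev,
  };
});

// Keep the biggest movers (positive or negative) by click change.
const topMovers = withDeltas
  .sort((a, b) => Math.abs(b.clicksDelta) - Math.abs(a.clicksDelta))
  .slice(0, 10);

return topMovers.map((row) => ({ json: row }));
```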
by Mohan Gopal
**Personalized Tour Package Recommendations via n8n + Pinecone + Lovable UI**

I've created an intelligent travel itinerary planner that connects a Lovable front-end UI with a smart backend powered by n8n, Pinecone, and OpenAI to deliver personalized tour packages based on natural language queries.

**What It Does**

Users type in their travel destination and duration (e.g., "Paris 5 days trip" or "Bali trip for 7 days, would love water sports, adventures and trekking included, also some historical monuments") through a Lovable UI. This triggers a webhook in n8n, which processes the request, searches vectorized tour data in Pinecone, and generates a personalized itinerary using OpenAI's GPT. The results are then structured and sent back to the frontend UI for display in an interactive, reorderable format.

**Workflow Architecture**

Lovable UI ➝ Webhook ➝ Tour Recommendation Agent ➝ Vector Search ➝ OpenAI Response ➝ Structured Output ➝ Response to Lovable

**Tools & Components Used**

- Webhook: Acts as the entry point between the Lovable frontend and n8n. Captures the user query (destination, duration) and forwards it into the workflow.
- OpenAI Chat Model: Interprets the user query and generates a user-friendly, structured tour package from the matched results.
- Simple Memory: Keeps chat state and context for follow-up queries (extendable for future features like multi-step planning or saved itineraries).
- Question Answering with Vector Store: Searches vector embeddings of pre-loaded tour data and finds the most relevant tour packages by comparing query embeddings.
- Pinecone Vector Store: Stores tour packages and activity data in vectorized format. Enables fast and scalable semantic search across destinations, themes (e.g., "adventure", "cultural"), and duration.
- OpenAI Embeddings: Embeds all tour and activity documents stored in Pinecone and converts incoming user queries into embedding vectors for semantic search.
- Structured Output Parser: Parses the final OpenAI-generated response into a consistent, frontend-consumable JSON format.
- Frontend (Lovable UI): Users type their destination or travel package needs into the tour search. Lovable queries the n8n workflow and displays beautifully structured, editable itineraries.

**How to Set It Up**

1. Webhook setup in n8n: Create a POST webhook node, set the webhook URL and connect it with the Lovable frontend.
2. Pinecone & embeddings: Convert your static tour package documents (PDFs, JSON, CSV, etc.) into embeddings using OpenAI and store them in a Pinecone namespace (e.g., kuala-lumpur-3-days).
3. Configure the "Answer with Vector Store" tool: Connect the tool to your Pinecone instance and pass the query embedding for matching.
4. Connect to OpenAI Chat: Use the GPT model to process the query plus context from Pinecone and generate an engaging itinerary description. Optionally chain a second model to format it into UI-consumable output.
5. Output parser & return: Use the Structured Output Parser to parse the response and pass it to the Respond to Webhook node for UI display (an illustrative request/response pair is sketched below).

**Ideal Use Cases**

- Smart itinerary planning for OTAs or DMCs
- Personalized travel recommendations in chatbots or apps
- Travel advisors and agents automating package generation

**Benefits**

- Highly relevant, contextual travel suggestions
- Natural query understanding via OpenAI
- Seamless frontend-backend integration via Webhook

If you're building personalized experiences for travelers using AI, give this approach a try! Let me know if you'd like the JSON for this workflow or help setting up the Pinecone data pipeline.
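To illustrate the contract between Lovable and the webhook, the incoming request and the parsed structured output could look roughly like the sketch below; all field names are assumptions for illustration, so adapt them to your own Structured Output Parser schema:

```json
{
  "request": {
    "query": "Bali trip for 7 days with water sports, trekking and historical monuments"
  },
  "response": {
    "destination": "Bali",
    "duration_days": 7,
    "days": [
      { "day": 1, "title": "Arrival and beach time", "activities": ["Check-in", "Sunset at Seminyak Beach"] },
      { "day": 2, "title": "Water sports", "activities": ["Snorkeling at Nusa Dua", "Jet ski session"] }
    ]
  }
}
```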