by InfraNodus
Set up a chat with your documents without the complex vector store setup. This template helps you:

- **ingest** your PDF / text / MD documents into a knowledge graph
- use the graph as the knowledge base for your AI chatbots (and other workflows)
- visualize the main **topics** and **gaps** in your documents (good for observability and research)

The knowledge base is provided by InfraNodus GraphRAG, with the knowledge graphs offering high-quality responses without the need to set up complex RAG vector store workflows. The advantages of using GraphRAG instead of the standard vector stores for knowledge are:

- **Easy and quick to set up and update** — no complex data import workflows needed
- A knowledge graph offers a holistic and interactive view of your knowledge base (accessible via our API or a web interface — also shareable)
- **Better retrieval of relations** between the document chunks = higher-quality responses

How it works

This template uses the InfraNodus knowledge graph as a knowledge base for your n8n AI agent node. The knowledge graph contains the documents you can upload using this template from your Google Drive. When the user asks a question via the chat interface, the agent forwards this question to the InfraNodus knowledge graph, retrieves a response, a summary, and a list of matching statements (based on advanced GraphRAG), then delivers the final response back to the user.

Here's a step-by-step description:

Step 1: Upload your documents

1. Put the PDF / text / MD files you want to chat with into a folder on your Google Drive.
2. Authorize access to that folder using the Google Drive node in the template.
3. Add the InfraNodus API key to the InfraNodus Save to Graph HTTP node (the request shape is sketched at the end of this description).
4. Optional: change the name of the graph you want to save the data to in the InfraNodus HTTP node (in the name field of the HTTP POST request).
5. Run the workflow to ingest all the files and save them into the graph.
6. Optional: check the link provided in the Step 1 workflow description to see the visualization of your knowledge base.

**Note:** you can replace the PDF to Text converter node with a better-quality **PDF converter** from ConvertAPI, which respects the original file layout and doesn't split text into small chunks.

Step 2: Chat with your documents

1. Deactivate the trigger in Step 1.
2. Activate the chat trigger in Step 2.
3. Add your InfraNodus API credentials to the Knowledge Base GraphRAG InfraNodus node.
4. Optional: change the graph name in the Knowledge Base node to match the name you provided in Step 1 above.
5. Run the chat and ask a question.
6. Watch the magic.

How to use

You need an InfraNodus GraphRAG API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.

Requirements

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key
- Google Drive OAuth access (follow the n8n instructions)
- Optional: ConvertAPI API key for better-quality PDF conversion

Customizing this workflow

You can customize this workflow by adding several experts to your AI agent. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20174217658396-Using-InfraNodus-Knowledge-Graphs-as-Experts-for-AI-Chatbot-Agents-in-n8n

Also check out the video tutorial with a demo.

For support and feedback, please contact us at https://support.noduslabs.com

To learn more about InfraNodus: https://infranodus.com
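For reference, the Save to Graph HTTP node sends a request shaped roughly like the sketch below. The endpoint URL is preconfigured in the template's HTTP node (shown here as a placeholder variable), and the text field name is an assumption for illustration; only the POST method, the Bearer key, and the name field come from the description above.

```bash
# Illustrative shape of the Save to Graph request.
# $INFRANODUS_SAVE_TO_GRAPH_ENDPOINT stands in for the URL preconfigured in the template;
# the "text" field name is hypothetical.
curl -X POST "$INFRANODUS_SAVE_TO_GRAPH_ENDPOINT" \
  -H "Authorization: Bearer $INFRANODUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "my_knowledge_graph", "text": "Extracted text from one of your PDF / MD documents..."}'
```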
by Yannick
🚀 How it works: This template turns a document (PDF, TXT, DOCX...) into an engaging LinkedIn post, ready to publish or to approve by email, with the help of an AI specialized in LinkedIn copywriting. The key steps are:

- Upload form: the user uploads a file or pastes text.
- Content type detection: a Switch node analyzes the file type (PDF, DOCX, TXT, or plain text). Note: DOCX requires a Make account to convert the document (the workflow also works without DOCX).
- Content extraction: depending on the format, the right extraction module is used.
- LinkedIn post generation: the AI turns the content into a LinkedIn post following an optimized copywriting methodology.
- Email validation: an email is sent to the user for approval, with the option to add an image.
- Automatic publication: if the user approves, the post is published on LinkedIn.

⚙️ Setup Steps:

- Connect your accounts: Google Docs OAuth, LinkedIn OAuth, OpenAI (via gpt-4.1-mini or another model), and SMTP + IMAP for sending and reading emails.
- Configure the form fields in the Form Trigger node for your use case.
- Customize the AI prompt in the AI Agent node if you want to adapt the tone or methodology.
- Check the email addresses in the Send Email node and the Email Trigger (IMAP) node so that the validation step works.
- Test the workflow with different files to make sure all types are handled correctly (PDF, DOCX, TXT, etc.).

🧩 Typical use cases:

- Create posts from meeting notes or reports.
- Turn an article or professional publication into LinkedIn content.
- Delegate the first draft of your social content to the AI.
- Bonus: monitor a newsletter in your mailbox to suggest a relevant LinkedIn post (you can remove this branch; it runs in parallel).
by Niklas Hatje
Who this is for

This template is for everyone who wants to download their n8n Cloud invoices automatically as PDFs instead of downloading them manually.

How it works

This workflow checks your Gmail inbox for new n8n invoice emails from n8n's payment provider, Paddle. Once it finds one, it converts the invoice URL into a PDF using pdflayer (an example call is sketched at the end of this description) and saves it in Google Drive.

Setup

1. Set up your Gmail and Google Drive credentials.
2. Create a free account at https://pdflayer.com/
3. Insert your pdflayer API key into the Setup node.
4. Insert the URL of the target Drive folder into the Setup node (make sure to remove everything after the ?).

How to adjust it to your needs

Instead of saving the PDF in Google Drive, you could also save it on your local system or any other storage provider, or send the PDF automatically to the right person in your company.
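As a rough sketch of what the conversion step does: pdflayer documents an access_key and document_url parameter on its convert endpoint, but treat the snippet below as illustrative rather than the template's exact preconfigured call.

```bash
# Illustrative pdflayer call: convert a hosted invoice URL into a PDF file.
# ACCESS_KEY is your pdflayer API key; INVOICE_URL is the link extracted from the Paddle email.
curl -G "https://api.pdflayer.com/api/convert" \
  --data-urlencode "access_key=$ACCESS_KEY" \
  --data-urlencode "document_url=$INVOICE_URL" \
  --output invoice.pdf
```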
by Inga Kruger
GBP Exchange Rate Email Workflow

Sends an email containing a table with the day's GBP exchange rates for several currencies when the user clicks the button to run the workflow.

How it works:

1. The exchange rate data comes from an API.
2. The data is converted into an HTML table.
3. The table is sent inside an email.
by Aymeric Besset
> 🛠️ Note: This workflow uses a custom Mastodon API request. Ensure your server supports bookmark access and that your access token has the right permissions. OAuth or token-based credentials must be configured.

🧑💼 Who is this for?

This workflow is ideal for digital researchers, social media users, and knowledge workers who want to automatically archive Mastodon bookmarks into their Raindrop.io collection for future reference and tagging.

🔧 What problem is this solving?

Mastodon users often bookmark posts they want to read or save for later, but there's no native integration to archive them outside the app. This workflow solves that by syncing bookmarked posts from Mastodon to Raindrop, making them more accessible, organized, and searchable long-term.

⚙️ What this workflow does

1. Triggers on a schedule (or manually).
2. Tracks the latest fetched min_id using workflow static data to avoid duplicates.
3. Sends an HTTP GET request to the Mastodon bookmarks API, using bearer token authentication (see the example request at the end of this description).
4. Validates and processes the bookmarks if new entries exist.
5. Parses pagination metadata (e.g. min_id) from the response headers.
6. Splits the response array to handle individual bookmarks.
7. Filters out entries with missing data.
8. Saves each post to Raindrop.io using its title and URL (the card URL is used if it exists).
9. Updates the min_id to remember where it left off.

🚀 Setup

1. Create a Mastodon access token with access to bookmarks.
2. Add a credential in n8n of type HTTP Bearer Auth with your token.
3. Create and connect a Raindrop OAuth2 credential.
4. Replace {VOTRE SERVEUR MASTODON} with your Mastodon server's base URL.
5. (Optional) Adjust the scheduling interval under the "Schedule Trigger" node.
6. Make sure the Raindrop collection ID is correct or leave it as the default (-1), which is the index of the `Unsorted` collection.

🧪 How to customize this workflow

- To save to a specific Raindrop collection, change the collectionId in both Raindrop nodes.
- You can extend the Code node to pull additional metadata like author, hashtags, or content excerpts.
- Add an Email or Slack node after Raindrop to notify you of saved bookmarks.
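For reference, a minimal sketch of the kind of request the HTTP node makes. The /api/v1/bookmarks endpoint and min_id parameter are part of the standard Mastodon API; adjust the server URL, token, and limit to your setup.

```bash
# Fetch bookmarks newer than the last seen ID; pagination hints come back in the Link header.
curl -sS -D headers.txt \
  -H "Authorization: Bearer $MASTODON_TOKEN" \
  "https://your.mastodon.server/api/v1/bookmarks?min_id=$LAST_MIN_ID&limit=40"
```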
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who is this for?

The Search Engine Intelligence Extractor is a powerful n8n automation that leverages Bright Data's MCP-based AI Agents to simulate human-like searches across Google, Bing, and Yandex, and then distills clean, structured insights using Google Gemini. This workflow is tailored for:

- SEO analysts researching competitors or market trends
- Market researchers needing real-time search visibility
- Journalists & content writers gathering contextual insights
- AI developers creating intelligent assistants
- Digital marketers tracking brand mentions or news

What problem is this workflow solving?

Traditional scraping of search engines is often blocked, cluttered, or filled with irrelevant information. Manually analyzing and cleaning this data for insight is time-consuming. This workflow solves the problem by:

- Simulating real user search behavior via the Bright Data MCP-based AI Agent
- Performing multi-platform search (Google, Bing, Yandex) in one unified flow
- Extracting clean, human-readable results (stripping ads, navigation, etc.)
- Structuring the content using the Google Gemini LLM
- Automating delivery via Webhook or saving to disk

What this workflow does

Input Fields Node:
- Accepts the search query
- Accepts the action, for example "Perform a Google search". Replace the action with Bing, Yandex, etc. for other search providers
- Accepts the Webhook notification URL

Bright Data MCP Agent Execution:
- Triggers Bright Data's intelligent search agent
- Handles search navigation, result loading, and pagination

Human Readable Data Extractor:
- Cleanses HTML and removes ads, footers, and irrelevant links
- Produces a readable narrative of the results

Final Output Handling:
- Saves the processed response to disk
- Sends the structured data to a Webhook for real-time use

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine (a quick launch sketch follows this description).
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.

How to customize this workflow to your needs

Add Scheduled Execution
- Add a Cron trigger to run this workflow on a set schedule (e.g., daily/weekly keyword tracking).

Push Results to Custom Destinations
- Connect the output to Google Sheets (for analytics or dashboards), PostgreSQL or MySQL DBs (for structured storage), Notion or Airtable (for content pipelines), or Slack or Email (for alerting teams).

Customize Webhook Notifications
- Update the Webhook URL in the notification node to push processed results to external APIs, CRMs, or real-time dashboards.
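As a quick way to sanity-check the MCP server installation outside of n8n, the Bright Data MCP server can typically be launched over STDIO with npx. Treat the exact flags and environment variables as assumptions to verify against the @brightdata/mcp documentation; n8n's MCP Client (STDIO) credential uses the same command/environment pairing.

```bash
# Launch the Bright Data MCP server locally over STDIO (illustrative invocation).
# API_TOKEN is your Bright Data token; the Web Unlocker zone created in the
# setup steps (mcp_unlocker) is what the server is expected to use.
API_TOKEN="<your-token>" npx -y @brightdata/mcp
```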
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who is this for?

This workflow template enables intelligent data extraction from ProductHunt using Bright Data's Model Context Protocol (MCP) and processes search results with Google Gemini. It is designed for individuals and teams who need automated, intelligent discovery and analysis of new tech products. It's especially valuable for:

- Startup Analysts & VC Researchers
- Growth Hackers & Marketers
- Recruiters & Tech Scouts
- Product Managers & Innovation Teams
- AI & Automation Enthusiasts

What problem is this workflow solving?

Traditional product discovery on ProductHunt is constrained by limited descriptions and requires repeated manual validation through web searches. Manually extracting and enriching this data is slow, repetitive, and error-prone. This workflow solves the problem by:

- Extracting real-time ProductHunt data using Bright Data's MCP infrastructure to mimic real-user behavior and avoid blocks.
- Performing contextual Google searches for a specific ProductHunt product to gather use cases, reviews, and related information.
- Structuring results using the Google Gemini LLM to provide human-readable insights and reduce noise.
- Delivering results seamlessly by saving output to disk, updating Google Sheets, and sending Webhook alerts.

What this workflow does

Input Field Node
- Define the ProductHunt category with the search term(s) you want to target. This drives the extraction and search operations.

Agent Operation Node
The agent performs two major tasks:
- Extract from ProductHunt: retrieves trending products from ProductHunt using Bright Data MCP.
- Contextual Google Search: for each product, the agent searches Google for deeper context, including reviews, competitor mentions, and real-world usage examples.

LLM Node (Google Gemini)
- Analyzes and summarizes the extracted web content
- Removes noise (ads, menus, etc.)
- Structures the content into bullet points, insights, or JSON objects

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post - model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.

How to customize this workflow to your needs

This workflow is flexible and modular, allowing you to adapt it for various research, product discovery, or trend analysis use cases. Below are the key customization points and how to modify them.

Define Your Target Products or Topics
- Change the input parameter to a specific ProductHunt category, tag, or keyword (e.g., "AI tools", "SaaS", "DevOps").

Change Output Destinations
- **Save to Disk**: change the file format (.json, .csv, .md) or the directory path.
- **Google Sheet**: modify the sheet name and structure (columns like Product, Summary, Link).
- **Webhook Notification**: point to a Slack/Discord/CRM/Webhook URL with payload mapping (a sample payload is sketched after this list).
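To illustrate the kind of payload mapping the Webhook Notification step could send: the URL and field names below are hypothetical examples matching the Product / Summary / Link columns mentioned above, not a fixed schema from the template.

```bash
# Hypothetical webhook notification carrying one enriched ProductHunt entry.
curl -X POST "https://example.com/your-webhook-endpoint" \
  -H "Content-Type: application/json" \
  -d '{"product": "Example AI Tool", "summary": "Short Gemini-generated summary of the product and its reviews.", "link": "https://www.producthunt.com/posts/example-ai-tool"}'
```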
by Vijayeta Sinel
Automate document translation and ensure translation accuracy using Straker Verify, Google Drive and Slack.

How it works

A workflow step is set up to "watch" a Google Drive folder. When your team members place new files in this folder, they are downloaded. Straker Verify then translates them and provides a quality score. Once Straker Verify has completed this, the job info is fetched, the translation is saved to an output folder and you are notified via Slack.

What problem does this solve?

When using AI to translate documents, you have no idea about the quality and accuracy of the output. This template answers the question "How good is my translation?" so you have a high level of confidence before you publish.

Who is this for?

This workflow template is designed for businesses needing translation and localization of documents such as text docs, presentations, web pages, transcripts, video subtitles and others. Use it to build workflows that localize your content at scale while maintaining translation quality, accuracy and compliance.

Set up instructions: Straker Verify integration with n8n

Connect to Straker Verify: Obtain Your API Key
1. Sign Up/Log In: visit https://verify.straker.ai/ to create an account or log in.
2. Navigate to API Keys: go to Verify → Settings → API Keys.
3. Copy Your Key: find and copy your API key.
4. Add Key to n8n: in n8n, go to Settings → Credentials → Straker Verify.

Set Up Credentials for the Straker Verify Node
1. Open n8n and go to "Credentials" in the left sidebar.
2. Search for "Straker Verify" and select "Straker Verify API" from the dropdown.
3. Paste the copied access token from the Verify app and save it.

Assign Credentials to the Node
1. Go to your workflow and open the Straker Verify node.
2. Select the "Straker Verify API" credentials you just created from the dropdown (Credentials to connect with), and save.

✅ You are now ready to use the workflow.

Workflow Steps

Step 1: Initiate Workflow – Upload Files to Google Drive
Upload one or more files to the designated Google Drive folder. This triggers the "New File in Google Drive" node in n8n.

Step 2: Verify Token Balance
The workflow checks your token balance via the Get Current User Balance and User Has Enough Tokens nodes. Not enough tokens? You will receive a Slack message ("Not enough tokens"); please top up and re-upload your files.

Step 3: Select Workflow
The system fetches available workflows via Fetch All Users Workflows. No matching workflow found? You'll be notified via Slack ("No workflow found").

Step 4: Create Project in Straker Verify
The following steps are handled automatically: fetch language/project options, download files from Google Drive, and create a new project using Create Straker Project. ✅ You'll receive a Slack confirmation: "Project created – ID: xxxxxx"

Step 5: Process Completion & File Return
Once files are processed by Straker, the Incoming Translation Result webhook is triggered. The workflow downloads the processed files (Get File Content from Straker) and uploads them to Google Drive (Upload File to Google Drive).

Step 6: Confirmation - Workflow Complete
You'll receive a final Slack message: "Workflow complete – Files are now available in Google Drive"
by explorium
Explorium Event-Triggered Outreach

This n8n and agent-based workflow automates outbound prospecting by monitoring Explorium event data (e.g. product launches, new office openings, new investments and more), researching companies, identifying key contacts, and generating tailored sales emails leveraging the Explorium MCP server.

Template Workflow Overview

Node 1: Webhook Trigger
Purpose: Listens for real-time product launch events pushed from Explorium's webhook system.
How it works:
- Explorium sends HTTP POST requests containing event data.
- The webhook payload includes company name, business ID, domain, product name, and event type (a sample test request is sketched after this overview).
Pay attention: Product launch is just one example. You can easily enroll in many more meaningful events. To learn about events and how to enroll in them, visit the events documentation.

Node 2: Company Research Agent
Agent Type: Tools Agent
Purpose: Enrich company data after an event occurs.
How it works:
- Uses Explorium MCP via the MCP Client tool to gather additional company data.
- Uses Anthropic Claude (Chat Model) to process and interpret company information for downstream personalization.

Node 3: Employee Data Retrieval
Purpose: Retrieve prospect-level data for targeting.
How it works:
- Uses an HTTP Request node to call Explorium's fetch_prospects endpoint.
- Filters prospects by company business_id, departments (Product, R&D, etc.), and seniority levels (owner, cxo, vp, director, senior, manager, partner, etc.).
- Pay attention: follow our fetch prospects documentation for the full list of filters and best practices.
- Limits results to the top 5 relevant employees.
- Code nodes handle the filtering logic, cleaning the API response, and formatting data for downstream agents.

Node 4: Conditional Branch - Prospect Data Check
If Node: checks whether prospect data was successfully retrieved.
Logic: if prospects are found → personalized emails per person; if no prospects are found → fall back to a company-level general email.

Node 5A: Email Writer #1 (No Prospect Data)
Agent Type: Tools Agent
Purpose: Write a generic outbound email using only company-level research and event info.
Powered by: Anthropic Chat Model

Node 5B: Loop Over Prospects → Email Writer #2 (Personalized)
Agent Type: Tools Agent
Purpose: Write a highly personalized email for each identified employee.
How it works:
- Loops through each individual prospect.
- Passes company research + employee data to the LLM agent.
- Generates customized emails referencing the prospect's title & department, the product launch, and a role-relevant Explorium value proposition.

Node 6: Slack Notifications
Purpose: Posts completed emails to an internal Slack channel for review or testing before final deployment.
Future State: can be swapped with an email sequencing platform in production.
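To test the webhook trigger end to end, you can post a sample event to the workflow's webhook URL. The field names below simply mirror the payload description above (company name, business ID, domain, product name, event type); treat the exact keys as illustrative and match them to your Explorium webhook registration.

```bash
# Hypothetical test event for the n8n webhook trigger (replace YOUR_WEBHOOK_URL).
curl -X POST "YOUR_WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d '{"company_name": "Acme Corp", "business_id": "abc123", "domain": "acme.com", "product_name": "Acme Copilot", "event_type": "product_launch"}'
```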
Setup Requirements

Explorium API Access
- MCP Client credentials for company enrichment and prospect fetching
- Registered webhook for event listening
- Get an Explorium API key

n8n Configuration
- Secure environment variables for API keys & the webhook secret
- Code nodes configured for JSON transformation, filtering & signature validation

Customization Options

Personalization Logic
- Update LLM prompt instructions to reflect ICP priorities
- Modify email templates based on role, department, or tenure logic
- Adjust fallback behavior when prospect data is unavailable

API Request Tuning
- Adjust page_size for the number of prospects retrieved
- Fine-tune seniority and department filters to match evolving targeting

Future Expansion
- Swap Slack notifications for outbound email automation
- Integrate call task assignment directly into the CRM
- Introduce an engagement scoring feedback loop (opens, clicks, replies)

Troubleshooting Tips
- Validate webhook signature matching to prevent unauthorized requests (a generic sketch follows below)
- Ensure the correct business_id is passed to the prospect fetching endpoint
- Confirm business enrichment returns sufficient data for the company researcher agent
- Review agent LLM responses for correct output structure and parsing consistency
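As an illustration of the signature-validation idea mentioned above: the actual scheme (header name, hashing algorithm, and secret handling) depends on how your Explorium webhook is configured, so treat this as a generic HMAC-SHA256 sketch rather than Explorium's documented mechanism.

```bash
# Compute an HMAC-SHA256 over the raw request body and compare it with the
# signature forwarded by the webhook provider (variable names are hypothetical).
EXPECTED=$(printf '%s' "$RAW_BODY" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" | awk '{print $2}')
if [ "$EXPECTED" = "$RECEIVED_SIGNATURE" ]; then
  echo "signature ok"
else
  echo "signature mismatch" >&2
fi
```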
by Lucas Peyrin
How it works

This workflow converts an HTML string into a polished PDF file using the powerful open-source Gotenberg service. It's designed to be a reusable utility in your automation stack.

1. Receives Input: The workflow is triggered with a JSON object containing the full html code as a string and a desired file_name for the output.
2. Prepares File: It converts the incoming HTML string into a binary index.html file, which is required for the API call.
3. Calls Gotenberg API: It sends the HTML file to a running Gotenberg instance via an HTTP request (an example curl call is shown at the end of this description). It also dynamically sets the output filename and embeds metadata (like Author, Title, and Creation Date) directly into the PDF.
4. Returns PDF: The workflow outputs the final binary PDF file, ready to be saved, sent in an email, or used in the next step of your main workflow.

Set up steps

Setup time: ~3 minutes

This workflow has one critical prerequisite: a running Gotenberg instance that your n8n can connect to.

1. Prerequisite: Run Gotenberg

You need to have the Gotenberg service running. The easiest way is with Docker. Add the following service to your docker-compose.yml file (the same one you use for n8n):

```yaml
services:
  # ... your n8n service ...
  gotenberg:
    image: gotenberg/gotenberg:8
    restart: always
```

Then, restart your stack with docker compose up -d. This makes Gotenberg available at the address http://gotenberg:3000 from within your n8n container.

2. Use as a Sub-Workflow

This workflow is ready to be used as a sub-workflow.

- In your main workflow, add an Execute Sub-Workflow node.
- In the Workflow parameter, select this "Create PDF from HTML" workflow.
- Provide the input data in the required format: a JSON object with html and file_name keys.
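For reference, the call the HTTP Request node makes can be reproduced with curl against Gotenberg's Chromium HTML route. The route and the Gotenberg-Output-Filename header are standard Gotenberg 8 features; the exact form fields the template sends (including the embedded PDF metadata) may differ slightly, so treat this as a sketch.

```bash
# Convert a local index.html into a PDF via a running Gotenberg instance.
curl -X POST "http://gotenberg:3000/forms/chromium/convert/html" \
  -H "Gotenberg-Output-Filename: my-document" \
  -F "files=@index.html" \
  -o my-document.pdf
```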
by Ibrahim Malick
⚠️ This template uses only official n8n nodes. No community nodes required.

🧑💼 Who is this for?

This workflow is designed for:
- Legal tech founders
- Marketing freelancers or consultants
- Agencies supporting lawyers and small law firms
- Anyone doing outbound outreach in the legal niche

❓ What problem is this solving?

LinkedIn is a goldmine for targeting legal professionals — but scraping and personalizing outreach is tedious and expensive. Most tools either:
- Require paid LinkedIn Sales Navigator
- Can't personalize at scale
- Violate LinkedIn's TOS

This workflow solves that by using free Google Search, OpenRouter AI, and GPT-4o to find, enrich, and message up to 1,000 solo lawyers per day — without using browser automation or scrapers.

⚙️ What this workflow does

1. Uses Google Programmable Search to find solo lawyers and small firm founders on LinkedIn (an example API call is sketched at the end of this description).
2. Parses each profile's name, title, profile URL, and snippet.
3. Saves raw lead data to Google Sheets.
4. Uses OpenRouter Sonar Pro to enrich each profile with external content.
5. Generates a personalized, 1-line message using GPT-4o.
6. Appends the final message to Google Sheets for outreach.

🛠️ Setup

Estimated time: 15–20 minutes

✅ Google Programmable Search
- Enable the Custom Search API on Google Cloud
- Create a programmable search engine set to search the full web
- Copy your API key and CX ID

✅ Google Sheets
- Create a sheet with the columns Name, Title, Profile URL, Outreach Message
- Share the sheet with your OAuth-connected Google account

✅ OpenRouter
- Sign up at openrouter.ai
- Fund with at least $5 and generate your API key
- Use the model perplexity/sonar-pro for real-time research

✅ GPT-4o (optional)
- You can use your OpenAI key or route GPT-4o via OpenRouter

All setup-specific values are marked clearly in sticky notes and placeholders.

🛠️ How to customize this workflow to your needs

- Change the Google search query to match your industry (e.g., "founder" AND "therapist" site:linkedin.com/in)
- Modify the AI prompt to match your tone (formal, casual, humorous)
- Connect the final output to your CRM (like HubSpot, Airtable, etc.)
- Add a second outreach message variant to A/B test performance

📌 Sticky Notes & Annotations

All nodes are clearly renamed for understandability (e.g., Find Lawyer Profiles, Parse LinkedIn Search Results). Color-coded sticky notes explain setup instructions, required credentials, and the use case.

🗂 Category

AI Sales Marketing
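A minimal sketch of the underlying Custom Search JSON API call: the key and cx values come from the Google Programmable Search setup above, and the query string is just one example of the LinkedIn site-restricted search described in the customization section.

```bash
# Search LinkedIn profile pages via Google's Custom Search JSON API.
curl -G "https://www.googleapis.com/customsearch/v1" \
  --data-urlencode "key=$GOOGLE_API_KEY" \
  --data-urlencode "cx=$SEARCH_ENGINE_ID" \
  --data-urlencode 'q="solo lawyer" site:linkedin.com/in'
```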
by Jay Hartley
What this template does

This workflow collects order data as it is produced, then sends a summary email of all orders at the end of every day, formatted as a table. It receives new orders via webhook and stores them in Airtable. At 7PM every day, it sends a summary email with the day's orders in an HTML table.

Setup: Instructions Video

1. Create a new table in Airtable and give it a field time with type date, orderID with type number, and orderPrice also with type number.
2. Create a new access token if you haven't already at https://airtable.com/create/tokens/new. Make sure to give the token the scopes data.records:read, data.records:write, schema.bases:read and access to whichever table you choose to store the orders. A pop-up window appears with the token. Use this token to make a Create New Credential > Access Token credential for Airtable in the Store Order and Airtable Get Today's Orders nodes.
3. Create access credentials for your Gmail as described here: https://developers.google.com/workspace/guides/create-credentials. Use the credentials from your client_secret.json in the Send to Gmail node.
4. In the Store Order node, change Base and Table to the database and table in your Airtable account you wish to use to store orders. Make sure to use these same values in the Airtable Get Today's Orders node.
5. Every time an order is created in your system, send a POST request to the Webhook from your order software. Each request must contain a single order with the fields 'orderID' and 'orderPrice' (or edit Set Order Fields to select which incoming fields you wish to save).
6. Change the schedule time for sending the email from Everyday at 7PM to whichever time you choose.

Test

1. Activate the workflow.
2. From the Webhook node, copy the Production URL.
3. Send the following curl request to the URL given to you:

curl -X POST -H "Content-Type: application/json" -d '{"orderID": 12345, "orderPrice": 99.99}' YOUR_URL_HERE

It should say Node executed successfully. Now check your Airtable and confirm the order was stored in the right place.