by Mohan Gopal
This workflow automates the process of reading EDI files generated by Sabre, parsing them with an AI Agent, and producing structured accounting reports such as:

📌 Accounts Receivable (AR) Summary
📌 Tax and Surcharges Report

It also uses Retrieval-Augmented Generation (RAG) to vectorize the Sabre Interface User Record (IUR), a 154-page technical document, so that the AI agent can reference it when clarification is required while generating reports.

⚙️ Tools & Integrations Used

| Component | Tool/Service | Purpose |
|---|---|---|
| Workflow Engine | n8n | Automation & orchestration |
| LLM Model | OpenAI GPT-4 / Chat Model | Natural language understanding and parsing |
| Embeddings Model | OpenAI Embeddings | Convert text into semantic vector format |
| Vector Database | Pinecone | Store and retrieve document chunks semantically |
| Storage | Google Drive | Source of raw EDI text files and PDF documentation |
| DataLoader + Splitter | n8n Node + Recursive Splitter | Loads and prepares documents for embedding |
| AI Agents | n8n AI Agent Node | Runs context-aware prompts and parses reports |

🧱 Workflow Breakdown

🧠 1. Vectorizing the Sabre IUR Document (RAG Setup)
📘 Objective: Enable the AI Agent to refer to the IUR document (154 pages) for detailed explanations of EDI terms, formats, and rules.
Flow steps:
- Google Drive Search + Download – find and pull the IUR PDF file.
- Default Data Loader – load the file and preprocess it for semantic splitting.
- Recursive Character Splitter – break large pages into meaningful chunks.
- OpenAI Embeddings – vectorize each chunk.
- Pinecone Vector Store – save into a Pinecone namespace for future retrieval.
✅ Result: The IUR is now searchable via semantic queries from the AI Agent (a code sketch of this chunk, embed, and upsert step appears at the end of this description).

📁 2. Reading and Extracting Data from EDI Files
📘 Objective: Parse raw EDI files for financial records and summaries.
Flow steps:
- Trigger – manual or scheduled execution of the workflow.
- Google Drive Search – finds all new .edi or .txt files.
- Download File Contents – loads the content of each file into memory.
- Extract from File – raw text extraction.

📊 3. Report Generation Using AI Agents
📘 Objective: AI Agents parse the extracted data to generate structured accounting reports.
a. Accounts Receivable Report Agent
- The extracted text is passed to an AI Agent.
- The model is connected to the OpenAI Chat Model (LLM) and the Pinecone Vector DB (IUR reference).
- Outputs a structured AR Summary Report.
b. Tax and Surcharges Report Agent
- Same steps as above, with prompts adjusted to extract taxes, fees, surcharges, and amounts.
✅ Output format: can be mapped to columns and inserted into a Google Sheet or exported as CSV/JSON.

📑 Sample Reports You Can Build
Already implemented:
✅ Accounts Receivable (AR) Summary Report
✅ Tax and Surcharges Report
Can be extended to:
- Accounts Payable (AP)
- Passenger Revenue
- Daily Sales
- Commission Report
- Net Profit Margin (if supplier cost + commission is available)

💡 Key Advantages
✅ No-code automation with n8n
✅ Semantic reasoning using AI + Vector DB (RAG)
✅ Can work with various Sabre outputs without manual parsing
✅ Modular: easy to add new report types
✅ Cloud-integrated (Drive, Pinecone, OpenAI)

🧪 Potential Improvements

| Area | Suggestions |
|---|---|
| Testing | Add a "Preview" step to validate extracted data before writing |
| Scalability | Batch mode + Google Sheet batching for multiple reports |
| Audit Trail | Log every file name, timestamp, and report type in a Google Sheet |
| Notification | Send Slack/Email when a new report is generated |
| Multi-model support | Add Claude/Gemini fallback if the OpenAI usage limit is hit |
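For readers curious what the RAG setup step does under the hood, here is a minimal Python sketch of the chunk, embed, and upsert flow that the n8n nodes perform. The index name, namespace, chunk sizes, and embedding model are assumptions for illustration, not values taken from the workflow itself.

```python
# Minimal sketch of step 1 (RAG setup): chunk the IUR text, embed with OpenAI,
# and upsert into a Pinecone namespace. Index name, namespace, and chunk sizes
# are illustrative assumptions, not values from the workflow.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                        # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("sabre-iur")                   # hypothetical index name

def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Rough stand-in for n8n's Recursive Character Splitter."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def embed_and_upsert(iur_text: str, namespace: str = "sabre-iur") -> None:
    chunks = chunk_text(iur_text)
    response = openai_client.embeddings.create(
        model="text-embedding-3-small", input=chunks
    )
    vectors = [
        {"id": f"iur-{i}", "values": item.embedding, "metadata": {"text": chunk}}
        for i, (item, chunk) in enumerate(zip(response.data, chunks))
    ]
    index.upsert(vectors=vectors, namespace=namespace)
```

The AI Agent later queries this same namespace whenever an EDI term or format needs clarification.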
by InfyOm Technologies
What problem does this workflow solve?
Capture leads from your website, enrich them via Apollo, store them in HubSpot, send a personalized thank-you email, and notify your team, all automatically.

What does this workflow do?
- Capture leads from website forms automatically.
- Send a personalized thank-you email to each new lead.
- Enrich lead data using Apollo for deeper insights.
- Create or update a contact/lead in HubSpot CRM.
- Notify the internal team via email about the new lead.

Setup
Gmail Setup
- Create Gmail credentials by creating a project in the Google Cloud Console.
HubSpot Setup
- Create HubSpot credentials with an App Token.

How it works
This workflow automates your lead capture, enrichment, and follow-up process to ensure no opportunity is missed. Here's how it works:

1. Website Form Submission
A visitor submits a lead form on your website. Lead details like name, email, company, and message are captured instantly.

2. Personalized Thank-You Email
A customized thank-you email is automatically sent to the lead, building trust and confirming receipt of their inquiry.

3. Apollo Lead Enrichment
The captured data is enriched using Apollo to fetch additional information such as job title, LinkedIn profile, and other details. This helps you better understand and qualify your leads (see the enrichment sketch at the end of this description).

4. Create Lead in HubSpot
The enriched lead information is pushed into HubSpot as a new contact or lead. Duplicate checks ensure that there are no repeat entries.

5. Internal Notification
An email notification with the enriched lead details is sent to your team, so they can follow up immediately with a complete profile.

Who can use it?
This workflow can be used by any website owner with a "Get In Touch" or "Contact Us" button.
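As a rough illustration of step 3, here is a hedged Python sketch of calling Apollo's people-match API to enrich a lead by email. The endpoint path, header name, and response fields are assumptions based on Apollo's public documentation; verify them against the current Apollo API docs before relying on this.

```python
# Hedged sketch of step 3 (Apollo Lead Enrichment). Endpoint, header, and response
# fields are assumed, not taken from the workflow; check Apollo's API docs.
import requests

def enrich_lead(email: str, apollo_api_key: str) -> dict:
    response = requests.post(
        "https://api.apollo.io/v1/people/match",   # assumed endpoint
        headers={"Content-Type": "application/json", "X-Api-Key": apollo_api_key},
        json={"email": email},
        timeout=30,
    )
    response.raise_for_status()
    person = response.json().get("person") or {}
    return {
        "job_title": person.get("title"),
        "linkedin_url": person.get("linkedin_url"),
        "organization": (person.get("organization") or {}).get("name"),
    }
```

The enriched fields are what get written to HubSpot and included in the internal notification email.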
by Yaron Been
Automated solution to extract and organize contact information from Upwork job postings, enabling direct outreach to potential clients who post jobs matching your expertise.

🚀 What It Does
- Scrapes job postings for contact information
- Extracts email addresses and social profiles
- Organizes leads in a structured format
- Enables direct outreach campaigns
- Tracks response rates

🎯 Perfect For
- Freelancers looking to expand their client base
- Agencies targeting specific industries
- Sales professionals in the gig economy
- Recruiters sourcing clients
- Digital marketing agencies

⚙️ Key Benefits
✅ Access to hidden contact information
✅ Expand your client base
✅ Beat the competition to opportunities
✅ Targeted outreach campaigns
✅ Higher response rates

🔧 What You Need
- Upwork account
- n8n instance
- Email service (for outreach)
- CRM (optional)

📊 Features
- Email pattern detection (see the regex sketch at the end of this description)
- Social media profile extraction
- Company website discovery
- Lead scoring system
- Outreach tracking

🛠️ Setup & Support
Quick Setup: start collecting leads in 20 minutes with our step-by-step guide.
📺 Watch Tutorial
💼 Get Expert Support
📧 Direct Help

Take control of your freelance career with direct access to potential clients. Transform how you find and secure projects on Upwork.
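The feature list is mostly descriptive, but here is a minimal Python sketch of what "email pattern detection" and "social media profile extraction" can look like in practice: plain regex scanning of a job posting's text. The patterns are illustrative and not taken from the actual workflow.

```python
# Illustrative regex-based contact extraction from a job posting's text.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
SOCIAL_RE = re.compile(r"https?://(?:www\.)?(?:linkedin\.com|twitter\.com|x\.com)/[^\s\"')]+")

def extract_contacts(posting_text: str) -> dict:
    """Return de-duplicated emails and social profile URLs found in the text."""
    return {
        "emails": sorted(set(EMAIL_RE.findall(posting_text))),
        "social_profiles": sorted(set(SOCIAL_RE.findall(posting_text))),
    }

# Example:
# extract_contacts("Reach me at jane@acme.com or https://www.linkedin.com/in/jane-doe")
```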
by Jimleuk
This n8n workflow demonstrates a simple multi-agent setup to perform the task of competitor research. It showcases how using the HTTP request tool can reduce the number of nodes needed to achieve a workflow like this.

How it works
- For this template, a source company is defined by the user and sent to Exa.ai to find competitors (a request sketch appears at the end of this description).
- Each competitor is then funnelled through 3 AI agents that go out onto the internet and retrieve specific datapoints about the competitor: company overview, product offering and customer reviews.
- Once the agents are finished, the results are compiled into a report which is then inserted into a Notion database.

Check out an example output here: https://jimleuk.notion.site/2d1c3c726e8e42f3aecec6338fd24333?v=de020fa196f34cdeb676daaeae44e110&pvs=4

Requirements
- An OpenAI account for the LLM.
- Exa.ai account for access to their AI search engine.
- SerpAPI account for Google search.
- Firecrawl.dev account for webscraping.
- Notion.com account for the database to save final reports.

Customising the workflow
- Add additional agents to gather more datapoints such as SEO keywords and metrics.
- Not using Notion? Feel free to swap this out for your own database.
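To make the first step concrete, here is a hedged Python sketch of asking Exa.ai for competitors of the source company via a plain HTTP request, mirroring how the template leans on the HTTP request tool. The endpoint, header name, and payload fields are assumptions based on Exa's public search API; check Exa's documentation for the exact schema.

```python
# Hedged sketch: find competitors of a source company via Exa.ai's search API.
# Endpoint, header, and payload fields are assumed, not taken from the template.
import requests

def find_competitors(source_company: str, exa_api_key: str, num_results: int = 5) -> list[dict]:
    response = requests.post(
        "https://api.exa.ai/search",                    # assumed endpoint
        headers={"x-api-key": exa_api_key, "Content-Type": "application/json"},
        json={
            "query": f"competitors of {source_company}",
            "numResults": num_results,
            "type": "neural",                           # assumed: Exa's semantic search mode
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])

# Each result (title, url, ...) would then be handed to the three research agents.
```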
by Matthieu
Search LinkedIn companies, score them with AI and add them to a Google Sheet CRM

Setup video: https://youtube.com/watch?v=m904RNxtF0w&t

Who is this for?
This template is ideal for sales teams, business development professionals, and marketers looking to build a targeted prospect database with automatic qualification. Perfect for agencies, consultants, and B2B companies wanting to identify and prioritize the most promising potential clients.

What problem does this workflow solve?
Manually researching companies on LinkedIn, evaluating their fit for your services, and tracking them in your CRM is time-consuming and subjective. This automation streamlines lead generation by automatically finding, scoring, and importing qualified prospects into your database.

What this workflow does
This workflow automatically searches for companies on LinkedIn based on your criteria, retrieves detailed information about each company, filters them based on quality indicators, uses AI to score how well they match your ideal customer profile (a scoring sketch appears at the end of this description), and adds them to your Google Sheet CRM while preventing duplicates.

Setup
- Create a Ghost Genius API account and get your API key
- Configure the HTTP Request nodes with Header Auth credentials
- Create a copy of the provided Google Sheet template
- Set up your Google Sheet and OpenAI credentials following the n8n documentation
- Customize the "Set Variables" node to match your target audience and scoring criteria

How to customize this workflow
- Modify search parameters to target different industries, locations, or company sizes
- Adjust the follower count threshold based on your qualification criteria
- Customize the AI scoring system to align with your specific product or service offering
- Add notification nodes to alert you when high-scoring companies are identified
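The AI scoring step can be pictured as a single structured prompt. Below is a minimal Python sketch using the OpenAI API; the prompt, score scale, and model name are illustrative assumptions, not the template's exact configuration.

```python
# Minimal sketch of AI scoring: rate how well a company matches your ideal
# customer profile (ICP). Prompt, scale, and model are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_company(company: dict, icp_description: str) -> dict:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "You score B2B prospects. Reply as JSON: "
                        '{"score": 0-100, "reason": "..."}'},
            {"role": "user",
             "content": f"Ideal customer profile:\n{icp_description}\n\n"
                        f"Company data:\n{json.dumps(company, ensure_ascii=False)}"},
        ],
    )
    return json.loads(completion.choices[0].message.content)

# Example:
# score_company({"name": "Acme", "industry": "SaaS", "followers": 12000},
#               "B2B SaaS companies with 50-500 employees in Europe")
```

In the workflow, the returned score and reason would be written to the Google Sheet CRM alongside the company record.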
by Jimleuk
This n8n template is one of a 3-part series exploring use-cases for clustering vector embeddings:
- Survey Insights
- Customer Insights
- Community Insights

This template demonstrates the Customer Insights scenario, where Trustpilot reviews can be quickly grouped by similarity and an AI agent can generate insights on those groupings. With this workflow, marketers can save days and even weeks of work breaking down their own or competitor reviews and identify frequently mentioned positives and negatives.

Sample output: https://docs.google.com/spreadsheets/d/e/2PACX-1vQ6ipJnXWXgr5wlUJnhioNpeYrxaIpsRYZCwN3C-fFXumkbh9TAsA_JzE0kbv7DcGAVIP7az0L46_2P/pubhtml

How it works
- Trustpilot reviews are scraped for a particular company using the HTTP Request node.
- Reviews are then inserted into a Qdrant collection, carefully tagged with the question and Trustpilot metadata.
- Reviews are fetched and put through a clustering algorithm using the Python Code node (a minimal clustering sketch appears at the end of this description). The Qdrant points are returned in clustered groups.
- Each group is looped over to fetch the payloads of the points, which are fed to the AI agent to summarise and generate insights.
- The resulting insights and raw responses are then saved to the Google Spreadsheet for further analysis by the marketer.

Requirements
- Qdrant vector store for storing embeddings.
- OpenAI account for embeddings and LLM.

Customising the template
- Adjust clustering parameters to ones that make sense for your data.
- Consider expanding the date range of reviews for insights over common intervals: 3 months, 6 months and YTD.
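For reference, here is a minimal Python sketch of the kind of clustering the Python Code node performs: grouping review embeddings with KMeans. The template's actual algorithm and parameters may differ; the cluster count here is an assumption.

```python
# Minimal sketch of the clustering step: group Qdrant points by their embeddings.
# The workflow's real algorithm and parameters may differ.
import numpy as np
from sklearn.cluster import KMeans

def cluster_points(points: list[dict], n_clusters: int = 8) -> dict[int, list[str]]:
    """points: Qdrant-style records, each with an 'id' and a 'vector' (embedding)."""
    vectors = np.array([p["vector"] for p in points])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=42).fit_predict(vectors)

    groups: dict[int, list[str]] = {}
    for point, label in zip(points, labels):
        groups.setdefault(int(label), []).append(point["id"])
    return groups  # cluster id -> list of point ids, later used to fetch payloads
```

Each group of point ids is then expanded back into review payloads and summarised by the AI agent.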
by Ayoub
Who is this for?
This workflow is ideal for developers, content creators, or customer support teams looking to automate text-to-speech conversion using OpenAI.

What problem does this solve?
It automates the process of converting text inputs into speech, reducing manual effort and enhancing productivity.

What this workflow does
This workflow triggers when a text input is received via a webhook, converts it into audio using the OpenAI API (a code sketch appears after the setup steps), and sends the generated speech back through a webhook response.

Setup
- Ensure you have an OpenAI API key (you can get it from the OpenAI website).
- Set up the webhook URL and parameters.
- Configure the OpenAI node with your API key (Create New Credentials).
- Set up the Respond to Webhook node.
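The conversion step boils down to one call to OpenAI's text-to-speech endpoint, which the workflow's OpenAI node makes behind the scenes. Here is a hedged Python sketch; the model and voice names are illustrative, so pick whichever your account supports.

```python
# Hedged sketch of the text-to-speech conversion step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def text_to_speech(text: str, out_path: str = "speech.mp3") -> str:
    response = client.audio.speech.create(
        model="tts-1",    # assumed model name
        voice="alloy",    # assumed voice
        input=text,
    )
    with open(out_path, "wb") as f:
        f.write(response.read())   # raw audio bytes returned by the API
    return out_path
```

In the workflow, the resulting binary audio is returned directly through the Respond to Webhook node instead of being written to disk.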
by Artem Boiko
How it works
This template automates the conversion of CAD and BIM files (Revit, AutoCAD, IFC, MicroStation; e.g. .rvt, .dwg, .ifc, .dgn) into structured Excel databases and lightweight 3D geometry (.dae) files using the DataDrivenConstruction open-source converter.

📦 High-level steps:
- Set the file paths and converter path in the Set node
- Trigger the conversion via Execute Command (runs the .exe converter offline; see the sketch at the end of this description)
- Output includes .xlsx (data) and .dae (3D model) files
- Includes sticky note instructions for troubleshooting and GitHub repo info

Set up steps
🕒 Setup time: ~10 minutes

You'll need:
- Windows machine (offline or airgapped OK)
- Path to the converter .exe file
- Path to a sample .rvt (or .ifc, .dwg, .dgn) file

🧷 Setup paths in the Set node:
path_to_converter = "C:\\...\\RvtExporter.exe"
path_project_file = "C:\\...\\project.rvt"

Docs & Issues: Full Readme on GitHub
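To show roughly what the Execute Command node does, here is a hedged Python sketch that invokes the converter .exe against the project file. The exact command-line arguments expected by RvtExporter.exe are an assumption; check the DataDrivenConstruction readme for the real invocation.

```python
# Hedged sketch of the Execute Command step: run the offline converter .exe.
# The argument format is assumed; see the converter's readme for specifics.
import subprocess

path_to_converter = r"C:\Tools\DDC\RvtExporter.exe"   # replace with your converter path
path_project_file = r"C:\Projects\project.rvt"        # replace with your project file

result = subprocess.run(
    [path_to_converter, path_project_file],   # assumed: converter takes the file path as its argument
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout)
if result.returncode != 0:
    print("Conversion failed:", result.stderr)
# On success, the converter produces the .xlsx (data) and .dae (3D model) outputs
# described above; where they are written depends on the converter's settings.
```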
by Behram
Automated n8n workflow: receives videos via a form, dubs/translates them into the selected languages, and, upon completion, uploads them to multiple social media channels and cloud drives, including Box, Dropbox, YouTube, Telegram, and Postiz (Facebook, Instagram, TikTok, Reddit, etc.).

Workflows
- Via the n8n form, select the files to dub and the desired languages.
- Listen on a webhook and, whenever dubbing finishes, upload to the desired platforms.

Used Stacks
- DubLab App (API key, webhook setup required)

Optional (upload)
- Telegram (token required)
- Box (OAuth2 required)
- Dropbox (OAuth2 required)
- YouTube (OAuth2 required)
- Postiz (API key required)
by Belgacem Dhiflaoui
What Problem Does This Solve? 🛠️
This workflow automates the process of extracting information from a Google Doc, storing it in a Pinecone vector database, and using it to personalize and send emails based on user input via chat. It eliminates the manual steps of gathering recipient data, writing messages, and dispatching emails, providing a fully automated, intelligent communication system.

Perfect for teams that need to:
- Maintain dynamic contact lists
- Personalize bulk or contextual email outreach
- Use chat interfaces to trigger intelligent email actions

Target audience: sales teams, marketing departments, HR staff, startup founders, or anyone looking to automate AI-powered communication workflows.

What Does It Do? 🌟
- Extracts content from a Google Docs document (e.g., a list of contacts or structured notes)
- Splits, embeds, and stores that content in Pinecone for semantic search
- Listens for incoming chat messages using n8n's chatTrigger
- Uses LangChain agents with OpenAI to:
  - Search Pinecone for contextually relevant information (e.g., email addresses)
  - Compose personalized emails based on instructions
- Sends emails using the Gmail API, triggered dynamically from the AI output

Key Features 📋
- Google Docs integration for live document data
- Embedding & vector search with Pinecone for AI lookups
- Custom LangChain agents with tool-calling logic (search + send)
- Full support for OpenAI models (GPT-4o)
- Personalized email generation with dynamic name and message filling
- Modular design: plug-and-play with other tools like CRMs, Notion, etc.

Setup Instructions
Prerequisites
- **n8n Instance:** Self-hosted or cloud instance
- **Google Docs Account:** For reading input content
- **Pinecone Account:** For storing document data semantically
- **OpenAI Account:** For generating embeddings and messages
- **Gmail Account:** With Gmail OAuth2 credentials for sending emails

Installation Steps 📦
1. Import the Workflow
Import the provided JSON files into your n8n instance.
2. Configure Credentials
Go to n8n > Credentials, and set up:
- **Google Docs API**
- **OpenAI API**
- **Pinecone API**
- **Gmail OAuth2**
3. Set Your Pinecone Index & Namespace
Ensure you have a working Pinecone index (e.g., n8ndocs) and namespace (e.g., docsmail).
4. Test the Full Flow
- Run the Google Docs → Pinecone embedding workflow to prepare data.
- Send a message to the chatTrigger endpoint (e.g., "Send an offer to User").
- Check the execution log to verify correct tool usage and Gmail delivery.

How It Works 🔍
1. Data Preparation
- Google Doc content is fetched and chunked.
- OpenAI embeddings are created.
- Data is stored in Pinecone under a specific namespace.
2. Chat Trigger
- A webhook captures chat input.
- The LangChain agent interprets the user request.
- The agent uses two tools:
  - Vectorstore_mails: retrieves relevant emails via Pinecone vector search (a retrieval sketch appears at the end of this description)
  - send_mail: uses an internal n8n sub-workflow to send Gmail messages
3. Mail Generation & Delivery
- The email is personalized using recipient info (name/email from Pinecone).
- The message follows a clean, friendly format with a clear subject and closing.
- Delivered via the Gmail integration.
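Here is a minimal Python sketch of the retrieval that the Vectorstore_mails tool performs: embed the agent's query and search the Pinecone namespace for matching contact information. The index name (n8ndocs) and namespace (docsmail) come from the setup instructions above; the embedding model and metadata fields are illustrative assumptions.

```python
# Minimal sketch of the Vectorstore_mails retrieval step.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="YOUR_PINECONE_API_KEY").Index("n8ndocs")

def search_contacts(query: str, top_k: int = 3) -> list[dict]:
    """Embed the query and return the closest stored document chunks."""
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small", input=query
    ).data[0].embedding
    result = index.query(
        vector=embedding, top_k=top_k, namespace="docsmail", include_metadata=True
    )
    return [{"score": m.score, "text": (m.metadata or {}).get("text")} for m in result.matches]

# Example:
# search_contacts("email address for the marketing lead at Acme")
```

The agent then passes the retrieved name and email into the send_mail tool to compose and deliver the personalized message.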
by David Ashby
Complete MCP server exposing all LoneScale Tool operations to AI agents. Zero configuration needed: all 2 operations are pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: serves as your server endpoint for AI agent requests
• Tool Nodes: pre-configured for every LoneScale Tool operation
• AI Expressions: automatically populate parameters via $fromAI() placeholders
• Native Integration: uses the official n8n LoneScale Tool node with full error handling

📋 Available Operations (2 total)
Every possible LoneScale Tool operation is included:
📝 List (1 operation)
• Create a list
🔧 Item (1 operation)
• Create an item

🤖 AI Integration
Parameter handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response format: native LoneScale Tool API responses with full data structure
Error handling: built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: add the MCP server URL to your configuration
• Custom AI apps: use the MCP URL as a tool endpoint
• Other n8n workflows: call MCP tools from any workflow
• API integration: direct HTTP calls to MCP endpoints

✨ Benefits
• Complete coverage: every LoneScale Tool operation available
• Zero setup: no parameter mapping or configuration needed
• AI-ready: built-in $fromAI() expressions for all parameters
• Production-ready: native n8n error handling and logging
• Extensible: easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Jimleuk
This n8n workflow is a proof-of-concept template exploring how we might work with multimodal LLMs and their multi-image analysis capabilities.

In this demo, we compare 2 screenshots of a webpage taken at different timestamps and pass both to our multimodal LLM for a visual comparison of differences. Handling multiple binary inputs (i.e. images) in an AI request is supported by n8n's basic LLM node.

How it works
This template is intended to run as 2 parts: first to generate the base screenshots, and next to run the visual regression test, which captures fresh screenshots.
- Starting with a list of webpages captured in a Google Sheet, base screenshots are captured for each using an external web scraping service called Apify.com (I prefer Apify, but feel free to use whichever web scraping service is available to you).
- These base screenshots are uploaded to Google Drive and will be referenced later when we run our testing.
- In phase 2 of the workflow, a scheduled trigger fires sometime in the future and reuses our web scraping service to generate fresh screenshots of our desired webpages.
- Next, we re-download our base screenshots in parallel and, with both old and new captures, pass these to our LLM node.
- In the LLM node's options, we define 2 "user message" inputs with the type of binary (data) for our images (see the sketch at the end of this description).
- Finally, we prompt our LLM with our testing criteria and capture the regressions detected. Note that results will vary depending on which LLM you use.
- A final report can be generated using the LLM's output and is uploaded to Linear.

Requirements
- Apify.com API key for the web screenshotting service
- Google Drive and Sheets access to store the list of webpages and captures

Customising this workflow
- Have your own preferred web screenshotting service? Feel free to swap out Apify for your service of choice.
- If the web screenshot is too large, it may prove difficult for the LLM to spot differences with precision. Try splitting up captures into smaller images instead.
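Outside of n8n, the multi-image comparison step looks roughly like the Python sketch below: both screenshots are sent in a single request to a multimodal OpenAI model along with the testing criteria. The model name and prompt are illustrative assumptions; in the template this is handled by the basic LLM node with two binary "user message" inputs.

```python
# Hedged sketch of the visual comparison step: pass two screenshots to a
# multimodal model and ask it to list regressions.
import base64
from openai import OpenAI

client = OpenAI()

def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def compare_screenshots(base_path: str, fresh_path: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Compare these two screenshots of the same webpage. "
                         "List any visual regressions (layout shifts, missing elements, text changes)."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encode_image(base_path)}"}},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encode_image(fresh_path)}"}},
            ],
        }],
    )
    return completion.choices[0].message.content
```

The text returned here is what would feed the final report uploaded to Linear.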