by Bela
Purpose of the workflow
Most scraping workflows get blocked by anti-bot technologies. To avoid this, you can use Scrappey to scrape any website you want.

How it works
We use test data and make an API call to the Scrappey service. We get the scraped website data back as a result.

Setup Steps
Replace YOUR_API_KEY in the "Scrappey API Call" node with your Scrappey API key (Register For Free).
Replace the test data with your production data. You can plug in any type of data connector at this point of your workflow.
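For reference, the "Scrappey API Call" node boils down to a single HTTP request. Here is a minimal Python sketch of the same call, following Scrappey's documented publisher endpoint (double-check their docs, as the exact request format may change):

```python
import requests

# Sketch of the "Scrappey API Call" step outside n8n. Endpoint and body follow
# Scrappey's documented publisher API at the time of writing -- verify in their docs.
API_KEY = "YOUR_API_KEY"  # replace with your Scrappey API key

resp = requests.post(
    "https://publisher.scrappey.com/api/v1",
    params={"key": API_KEY},
    json={"cmd": "request.get", "url": "https://example.com"},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())  # the scraped page data is returned in the response JSON
```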
by Mohan Gopal
This workflow automates the process of reading EDI files generated by Sabre, parsing them with an AI Agent, and producing structured accounting reports such as:
📌 Accounts Receivable (AR) Summary
📌 Tax and Surcharges Report

It also uses Retrieval-Augmented Generation (RAG) to vectorize the Sabre Interface User Record (IUR), a 154-page technical document, so that the AI agent can reference it when clarification is required while generating reports.

⚙️ Tools & Integrations Used

| Component | Tool/Service | Purpose |
| --- | --- | --- |
| Workflow Engine | n8n | Automation & orchestration |
| LLM Model | OpenAI GPT-4 / Chat Model | Natural language understanding and parsing |
| Embeddings Model | OpenAI Embeddings | Convert text into semantic vector format |
| Vector Database | Pinecone | Store and retrieve document chunks semantically |
| Storage | Google Drive | Source of raw EDI text files and PDF documentation |
| DataLoader + Splitter | n8n Node + Recursive Splitter | Loads and prepares documents for embedding |
| AI Agents | n8n AI Agent Node | Runs context-aware prompts and parses reports |

🧱 Workflow Breakdown

🧠 1. Vectorizing the Sabre IUR Document (RAG Setup)
📘 Objective: Enable the AI Agent to refer to the IUR document (154 pages) for detailed explanations of EDI terms, formats, and rules.
Flow Steps:
Google Drive Search + Download – Find and pull the IUR PDF file.
Default Data Loader – Load the file and preprocess it for semantic splitting.
Recursive Character Splitter – Break large pages into meaningful chunks.
OpenAI Embeddings – Vectorize each chunk.
Pinecone Vector Store – Save into a Pinecone namespace for future retrieval.
✅ Result: The IUR is now searchable via semantic queries from the AI Agent. (A code sketch of this ingestion sequence follows below.)

📁 2. Reading and Extracting Data from EDI Files
📘 Objective: Parse raw EDI files for financial records and summaries.
Flow Steps:
Trigger – Manual or scheduled execution of the workflow.
Google Drive Search – Finds all new .edi or .txt files.
Download File Contents – Loads the content of each file into memory.
Extract from File – Raw text extraction.

📊 3. Report Generation Using AI Agents
📘 Objective: AI Agents parse the extracted data to generate structured accounting reports.
a. Accounts Receivable Report Agent
The extracted text is passed to an AI Agent. The model is connected to:
OpenAI Chat Model (LLM)
Pinecone Vector DB (IUR reference)
Outputs a structured AR Summary Report.
b. Tax and Surcharges Report Agent
Same steps as above, with prompts adjusted to extract taxes, fees, surcharges, and amounts.
✅ Output Format: Can be mapped to columns and inserted into a Google Sheet or exported as CSV/JSON.

📑 Sample Reports You Can Build
Already implemented:
✅ Accounts Receivable (AR) Summary Report
✅ Tax and Surcharges Report
Can be extended to:
Accounts Payable (AP)
Passenger Revenue
Daily Sales
Commission Report
Net Profit Margin (if supplier cost + commission is available)

💡 Key Advantages
✅ No-code automation with n8n
✅ Semantic reasoning using AI + Vector DB (RAG)
✅ Works with various Sabre outputs without manual parsing
✅ Modular: easy to add new report types
✅ Cloud-integrated (Drive, Pinecone, OpenAI)

🧪 Potential Improvements

| Area | Suggestions |
| --- | --- |
| Testing | Add a "Preview" step to validate extracted data before writing |
| Scalability | Batch mode + Google Sheet batching for multiple reports |
| Audit Trail | Log every file name, timestamp, and report type in a Google Sheet |
| Notification | Send Slack/Email when a new report is generated |
| Multi-model support | Add Claude/Gemini fallback if the OpenAI usage limit is hit |
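For orientation, the RAG ingestion sequence (split → embed → store) can be sketched outside n8n in a few lines of Python. This is a simplified stand-in assuming the openai and pinecone client libraries; the index name, file name and chunk sizes are hypothetical:

```python
# Minimal RAG-ingestion sketch (assumes openai + pinecone client libraries and a
# pre-created Pinecone index; "iur-docs" and the file name are hypothetical).
from openai import OpenAI
from pinecone import Pinecone

def split_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Naive character splitter standing in for n8n's Recursive Character Splitter."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

openai_client = OpenAI()                      # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_PINECONE_KEY")    # placeholder credential
index = pc.Index("iur-docs")                  # hypothetical index name

iur_text = open("sabre_iur.txt", encoding="utf-8").read()  # pre-extracted PDF text
chunks = split_text(iur_text)

# Embed each chunk and upsert it with its text as metadata for later retrieval.
embeddings = openai_client.embeddings.create(
    model="text-embedding-3-small", input=chunks
)
index.upsert(
    vectors=[
        (f"iur-{i}", e.embedding, {"text": chunk})
        for i, (e, chunk) in enumerate(zip(embeddings.data, chunks))
    ],
    namespace="iur",
)
```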
by Yaron Been
Automated solution to extract and organize contact information from Upwork job postings, enabling direct outreach to potential clients who post jobs matching your expertise.

🚀 What It Does
Scrapes job postings for contact information
Extracts email addresses and social profiles
Organizes leads in a structured format
Enables direct outreach campaigns
Tracks response rates

🎯 Perfect For
Freelancers looking to expand their client base
Agencies targeting specific industries
Sales professionals in the gig economy
Recruiters sourcing clients
Digital marketing agencies

⚙️ Key Benefits
✅ Access to hidden contact information
✅ Expand your client base
✅ Beat the competition to opportunities
✅ Targeted outreach campaigns
✅ Higher response rates

🔧 What You Need
Upwork account
n8n instance
Email service (for outreach)
CRM (optional)

📊 Features
Email pattern detection (see the sketch below)
Social media profile extraction
Company website discovery
Lead scoring system
Outreach tracking

🛠️ Setup & Support
Quick Setup: Start collecting leads in 20 minutes with our step-by-step guide
📺 Watch Tutorial
💼 Get Expert Support
📧 Direct Help

Take control of your freelance career with direct access to potential clients. Transform how you find and secure projects on Upwork.
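The email pattern detection feature is essentially a regular-expression pass over the scraped posting text. A minimal Python sketch (the sample text is made up):

```python
import re

# Simple email pattern detection over scraped posting text (illustrative only).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

scraped_text = "Contact our PM at jane.doe@example.com or visit example.com."
emails = sorted(set(EMAIL_RE.findall(scraped_text)))
print(emails)  # ['jane.doe@example.com']
```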
by Jimleuk
This n8n workflow demonstrates a simple multi-agent setup to perform the task of competitor research. It showcases how using the HTTP Request tool can reduce the number of nodes needed to achieve a workflow like this.

How it works
For this template, a source company is defined by the user and sent to Exa.ai to find competitors. Each competitor is then funnelled through 3 AI agents that go out onto the internet and retrieve specific datapoints about the competitor: company overview, product offering and customer reviews. Once the agents are finished, the results are compiled into a report which is then inserted into a Notion database.
Check out an example output here: https://jimleuk.notion.site/2d1c3c726e8e42f3aecec6338fd24333?v=de020fa196f34cdeb676daaeae44e110&pvs=4

Requirements
An OpenAI account for the LLM.
Exa.ai account for access to their AI search engine.
SerpAPI account for Google search.
Firecrawl.dev account for web scraping.
Notion.com account for the database that stores the final reports.

Customising the workflow
Add additional agents to gather more datapoints such as SEO keywords and metrics.
Not using Notion? Feel free to swap this out for your own database.
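The Exa.ai step is a single HTTP request in this workflow. For reference, here is a hedged Python sketch of a competitor lookup against Exa's findSimilar endpoint (field names per Exa's public docs at the time of writing; verify before relying on them):

```python
import os
import requests

# Find companies similar to a source company via Exa.ai. Endpoint and fields
# follow Exa's public docs -- confirm the current contract before production use.
resp = requests.post(
    "https://api.exa.ai/findSimilar",
    headers={"x-api-key": os.environ["EXA_API_KEY"]},
    json={"url": "https://www.example-company.com", "numResults": 5},
    timeout=30,
)
resp.raise_for_status()
for result in resp.json().get("results", []):
    print(result.get("title"), "->", result.get("url"))
```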
by Jimleuk
This n8n template is one of a 3-part series exploring use-cases for clustering vector embeddings:
Survey Insights
Customer Insights
Community Insights

This template demonstrates the Customer Insights scenario, where Trustpilot reviews can be quickly grouped by similarity and an AI agent can generate insights on those groupings. With this workflow, marketers can save days or even weeks of work breaking down their own or competitor reviews and identify frequently mentioned positives and negatives.
Sample Output: https://docs.google.com/spreadsheets/d/e/2PACX-1vQ6ipJnXWXgr5wlUJnhioNpeYrxaIpsRYZCwN3C-fFXumkbh9TAsA_JzE0kbv7DcGAVIP7az0L46_2P/pubhtml

How it works
Trustpilot reviews are scraped for a particular company using the HTTP Request node.
Reviews are then inserted into a Qdrant collection, carefully tagged with the question and Trustpilot metadata.
Reviews are fetched and put through a clustering algorithm using the Python Code node (see the sketch below). The Qdrant points are returned in clustered groups.
Each group is looped over to fetch the payloads of its points, which are fed to the AI agent to summarise and generate insights.
The resulting insights and raw responses are then saved to a Google Spreadsheet for further analysis by the marketer.

Requirements
Qdrant vector store for storing embeddings.
OpenAI account for embeddings and LLM.

Customising the Template
Adjust the clustering parameters to suit your data.
Consider expanding the date range of reviews for insights over common intervals: 3 months, 6 months and YTD.
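The clustering step in the Python Code node can be approximated as follows. A minimal sketch assuming qdrant-client and scikit-learn; the host, collection name and cluster count are placeholders:

```python
import numpy as np
from qdrant_client import QdrantClient
from sklearn.cluster import KMeans

# Pull review points (vectors + payloads) from Qdrant, then group them by similarity.
client = QdrantClient(url="http://localhost:6333")           # placeholder host
points, _ = client.scroll(
    collection_name="trustpilot_reviews",                    # hypothetical collection
    with_vectors=True, with_payload=True, limit=1000,
)

vectors = np.array([p.vector for p in points])
labels = KMeans(n_clusters=8, n_init="auto").fit_predict(vectors)

# Collect point IDs per cluster, mirroring the "clustered groups" the template returns.
clusters: dict[int, list] = {}
for point, label in zip(points, labels):
    clusters.setdefault(int(label), []).append(point.id)
print({k: len(v) for k, v in clusters.items()})
```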
by Artem Boiko
How it works
This template automates the conversion of CAD and BIM files from Revit, AutoCAD, IFC and MicroStation (e.g. .rvt, .ifc, .dwg, .dgn) into structured Excel databases and lightweight 3D geometry .dae files using the DataDrivenConstruction open-source converter.

📦 High-level steps:
Set file paths and the converter path in the Set node
Trigger conversion via Execute Command (runs the .exe converter offline)
Output includes .xlsx (data) and .dae (3D model) files
Includes sticky note instructions for troubleshooting and GitHub repo info

Set up steps
🕒 Setup time: ~10 minutes
You'll need:
Windows machine (offline or air-gapped OK)
Path to the converter .exe file
Path to a sample .rvt (or .ifc, .dwg, .dgn) file

🧷 Set up paths in the Set node:
path_to_converter = "C:\\...\\RvtExporter.exe"
path_project_file = "C:\\...\\project.rvt"

Docs & Issues: Full Readme on GitHub
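Outside n8n, the Execute Command step amounts to invoking the converter with the project file. A Python sketch, under the assumption that the .exe takes the project path as its first argument (check the DataDrivenConstruction readme for the exact CLI):

```python
import subprocess
from pathlib import Path

# Mirror of the Execute Command node: run the offline converter against one project.
# Assumption: the converter accepts the project file as its sole positional argument.
converter = Path(r"C:\tools\ddc\RvtExporter.exe")   # placeholder paths
project = Path(r"C:\projects\project.rvt")

result = subprocess.run(
    [str(converter), str(project)],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

# Expected outputs land next to the project file: project.xlsx and project.dae.
for suffix in (".xlsx", ".dae"):
    out = project.with_suffix(suffix)
    print(out, "exists:", out.exists())
```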
by David Ashby
Complete MCP server exposing all LoneScale Tool operations to AI agents. Zero configuration needed - all 2 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every LoneScale Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n LoneScale Tool node with full error handling

📋 Available Operations (2 total)
Every possible LoneScale Tool operation is included:
📝 List (1 operation)
• Create a list
🔧 Item (1 operation)
• Create an item

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native LoneScale Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every LoneScale Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
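For the "Direct HTTP calls to MCP endpoints" option, a request would look roughly like the following Python sketch. MCP speaks JSON-RPC 2.0; the URL, tool name and arguments here are hypothetical, and some MCP transports (e.g. SSE) require a session handshake first, so consult the MCP spec:

```python
import requests

# Illustrative direct call to an MCP endpoint: "tools/call" invokes a named tool.
# The webhook URL, tool name and arguments below are placeholders, not real values.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "create_list", "arguments": {"name": "Q3 prospects"}},
}
resp = requests.post(
    "https://your-n8n-host/webhook/mcp-lonescale",  # hypothetical webhook URL
    json=payload,
    timeout=30,
)
print(resp.json())
```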
by David Ashby
Complete MCP server exposing all Hacker News Tool operations to AI agents. Zero configuration needed - all 3 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Hacker News Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Hacker News Tool node with full error handling

📋 Available Operations (3 total)
Every possible Hacker News Tool operation is included:
🔧 All (1 operation)
• Get many items
🔧 Article (1 operation)
• Get an article
👤 User (1 operation)
• Get a user

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Hacker News Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Hacker News Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
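For context, the operations this server exposes map onto Hacker News's public Firebase API. A direct Python call for orientation (the endpoints are real and documented by HN; the item ID and username are just well-known public examples, and the n8n node may call a different backend internally):

```python
import requests

# What "Get an article" / "Get a user" boil down to against HN's public API.
BASE = "https://hacker-news.firebaseio.com/v0"

item = requests.get(f"{BASE}/item/8863.json", timeout=10).json()
print(item["title"], "by", item["by"])

user = requests.get(f"{BASE}/user/pg.json", timeout=10).json()
print("karma:", user["karma"])
```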
by Lucas Peyrin
How it works
This workflow demonstrates how to use n8n to serve a complete, styled HTML webpage. It acts as a mini web server, responding to browser requests with your custom HTML content.
Webhook Trigger: The workflow starts with a Webhook node configured to listen for GET requests on a specific path. When you visit this node's Production URL in a browser, it triggers the workflow.
Respond with HTML: The Respond to Webhook node is configured to send a response back to the browser.
Content-Type Header: It sets a crucial response header, Content-Type: text/html, which tells the browser to render the response as a webpage, not just plain text.
HTML Body: The entire HTML, CSS, and JavaScript for the webpage is pasted directly into the Body field of this node.
When activated, visiting the webhook URL will instantly display the custom webpage.

Set up steps
Setup time: < 1 minute
This workflow is ready to use out of the box.
Activate the workflow.
Open the Your WebPage (Webhook) node and copy its Production URL.
Paste the URL into your browser to see the live tutorial page.
To use your own HTML, simply open the Site (Respond to Webhook) node and replace the content in the Body field with your own code.
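To sanity-check the header behaviour the template relies on, you can request the webhook from a script instead of a browser. A small Python sketch, with a placeholder standing in for your Production URL:

```python
import requests

# Fetch the webhook like a browser would and confirm the response is served as HTML.
url = "https://your-n8n-host/webhook/your-page"  # placeholder Production URL
resp = requests.get(url, timeout=10)

# Without Content-Type: text/html the browser would show the markup as plain text.
print(resp.status_code, resp.headers.get("Content-Type"))
print(resp.text[:200])  # first characters of the returned HTML
```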
by Yulia
This workflow is a modification of the previous template on how to create an SQL agent with LangChain and SQLite. The key difference: the agent has access only to the database schema, not to the actual data. To achieve this, SQL queries are made outside the AI Agent node, and the results are never passed back to the agent.
This approach allows the agent to generate SQL queries based on the structure of tables and their relationships, without having to access the actual data. This makes the process more secure and efficient, especially in cases where data confidentiality is crucial.

🚀 Setup
To get started with this workflow, you'll need to set up a free MySQL server and import your database (check Steps 1 and 2 in this tutorial). Of course, you can switch MySQL for another SQL database such as PostgreSQL; the principle remains the same. The key is to download the schema once and save it locally to avoid repeated remote connections (a sketch of this step follows below).
Run the top part of the workflow once to download and store the MySQL chinook database schema file on the server. With this approach, we avoid the need to repeatedly connect to the remote db4free database and fetch the schema every time. As a result, we get greater processing speed and efficiency.

🗣️ Chat with your data
Start a chat: send a message in the chat window.
The workflow loads the locally saved MySQL database schema, without having the ability to touch the actual data. The file contains the full structure of your MySQL database for analysis.
The LangChain AI Agent receives the schema and your input and begins to work.
The AI Agent generates SQL queries and brief comments based solely on the schema and the user's message.
An IF node checks whether the AI Agent has generated a query. When:
Yes: the AI Agent passes the SQL query to the next MySQL node for execution.
No: you get a direct answer from the Agent without further action.
The workflow formats the results of the SQL query, ensuring they are convenient to read and easy to understand. Once formatted, you get both the Agent answer and the query result in the chat window.

🌟 Example queries
Try these sample queries to see the schema-driven AI Agent in action:
Would you please list me all customers from Germany?
What are the music genres in the database?
What tables are available in the database?
Please describe the relationships between tables. - In this example, the AI Agent does not need to create an SQL query.
And if you prefer to keep the data private, you can manually execute the generated SQL query in your own environment using any database client or tool you trust 🗄️

💭 The AI Agent memory node does not store the actual data, as we run SQL queries outside the agent. It contains the database schema, user questions and the initial Agent reply. Actual SQL query results are passed to the chat window, but the values are not stored in the Agent memory.
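The "download the schema once" step can be reproduced with a single information_schema query. A minimal Python sketch assuming the mysql-connector-python package and placeholder credentials:

```python
import json
import mysql.connector

# Dump table/column structure once and cache it locally, so the agent can reason
# over the schema without ever touching row data. Credentials are placeholders.
conn = mysql.connector.connect(
    host="db4free.net", user="YOUR_USER", password="YOUR_PASSWORD", database="chinook"
)
cur = conn.cursor()
cur.execute(
    """
    SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, COLUMN_KEY
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = DATABASE()
    ORDER BY TABLE_NAME, ORDINAL_POSITION
    """
)
schema: dict[str, list[dict]] = {}
for table, column, dtype, key in cur.fetchall():
    schema.setdefault(table, []).append({"column": column, "type": dtype, "key": key})

with open("chinook_schema.json", "w") as f:
    json.dump(schema, f, indent=2)  # the file the agent receives instead of data
conn.close()
```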
by Miko
The workflow performs tasks that would normally require human intervention on Google News links, transforming the RSS feeds into data that can be used by an automated system like n8n, thus creating a solid foundation for further applications.

Who is this for?
This workflow is ideal for developers, journalists, and content aggregators who need to extract and clean Google News URLs from its RSS feed.

What problem does this workflow solve?
Google News RSS provides encoded URLs that contain additional tracking parameters. This workflow decodes those URLs and provides clean, direct links to news articles, making them easier to process, share, and analyze.

What this workflow does
Fetch Google News RSS – Retrieves news articles from Google News based on predefined parameters (language, country).
Limit results – Reduces the number of requests to avoid excessive API usage.
Extract encoded content – Retrieves the encoded news URLs.
Decode the URLs – Uses a decoding mechanism to extract clean links (see the sketch below).
Remove unwanted characters – Cleans up the decoded URLs to ensure they are properly formatted.
Aggregate results – Outputs a final list of clean, readable URLs.

Setup
Modify the RSS parameters (hl, gl) to match your target region.
Adjust the result limit to control the number of processed articles.

How to customize this workflow
To customize this workflow, you can add an HTTP Request node to retrieve the article's text, an HTML Extract node to process the text, an AI node to generate new content, and a WordPress node to publish it.
Another option is to use an AI Agent node to classify articles by category based on the title or through HTML Extract. You can then save the classified articles using a Google Sheets node, organizing them by category and creating a high-quality editorial plan.

This workflow efficiently processes Google News RSS, removes unnecessary encoding, and delivers clean, shareable URLs. 🚀
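For the decode step, here is a hedged Python sketch of one community approach: older Google News article IDs are URL-safe base64 wrapping the source URL. Google has changed this encoding before, so treat it as illustrative only (the sample ID below is constructed for the example):

```python
import base64
import re

# Hedged sketch of one community decoding approach. Newer article IDs may require
# an extra request to Google's internal endpoints instead -- illustrative only.
def decode_gnews_url(rss_link: str) -> str | None:
    article_id = rss_link.rstrip("/").split("/")[-1].split("?")[0]
    padded = article_id + "=" * (-len(article_id) % 4)  # restore base64 padding
    try:
        raw = base64.urlsafe_b64decode(padded)
    except Exception:
        return None
    # The decoded bytes embed the original URL in clear; pull it out with a regex.
    match = re.search(rb"https?://[^\x00-\x20\x80-\xff]+", raw)
    return match.group(0).decode() if match else None

example = "https://news.google.com/rss/articles/CBMiXmh0dHBzOi8vZXhhbXBsZS5jb20vbmV3cw"
print(decode_gnews_url(example))  # https://example.com/news
```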
by David Ashby
Complete MCP server exposing 2 NPR Station Finder Service API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
Import this workflow into your n8n instance
Add NPR Station Finder Service credentials
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the NPR Station Finder Service API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://station.api.npr.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 V3 (2 endpoints)
• GET /v3/stations: Get stations
• GET /v3/stations/{stationId}: Retrieve metadata for the station with the given numeric ID

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native NPR Station Finder Service API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
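For reference, the two wrapped operations correspond to direct calls against https://station.api.npr.org. A hedged Python sketch; the NPR API expects an OAuth bearer token, and the query parameter and station ID below are illustrative placeholders (check NPR's API docs for the exact contract):

```python
import requests

# Direct form of the two operations this MCP server wraps. Token, query parameter
# and station ID are placeholders -- verify against NPR's API documentation.
BASE = "https://station.api.npr.org"
headers = {"Authorization": "Bearer YOUR_NPR_TOKEN"}  # placeholder token

# GET /v3/stations -- search stations (query parameter assumed for illustration)
stations = requests.get(f"{BASE}/v3/stations", params={"q": "KQED"},
                        headers=headers, timeout=10)
print(stations.status_code)

# GET /v3/stations/{stationId} -- metadata for one station by numeric ID
station = requests.get(f"{BASE}/v3/stations/305", headers=headers, timeout=10)
print(station.status_code)
```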