by Leonard
Open Deep Research - AI-Powered Autonomous Research Workflow

Description
This workflow automates deep research by leveraging AI-driven search queries, web scraping, content analysis, and structured reporting. It enables autonomous research with iterative refinement, allowing users to collect, analyze, and summarize high-quality information efficiently.

How it works
🔹 User Input - The user submits a research topic via a chat message.
🧠 AI Query Generation - A Basic LLM node generates up to four refined search queries to retrieve relevant information.
🔎 SerpAPI Google Search - The workflow loops through each generated query and retrieves the top search results using the SerpAPI API.
📄 Jina AI Web Scraping - Extracts and summarizes webpage content from the URLs obtained via SerpAPI. (A sketch of this search-and-scrape step appears below.)
📊 AI-Powered Content Evaluation - An AI Agent evaluates the relevance and credibility of the extracted content.
🔁 Iterative Search Refinement - If the AI finds insufficient or low-quality information, it generates new search queries to improve the results.
📜 Final Report Generation - The AI compiles a structured Markdown report, including sources with citations.

Set Up Instructions
🚀 Estimated setup time: ~10-15 minutes

✅ Required API keys:
- SerpAPI → for Google Search results
- Jina AI → for text extraction
- OpenRouter → for AI-driven query generation and summarization

⚙️ n8n components used:
- AI Agents with memory buffering for iterative research
- Loops to process multiple search queries efficiently
- HTTP Requests for direct API interactions with SerpAPI and Jina AI

📝 Recommended enhancements:
- Add sticky notes in n8n to explain each step for new users
- Add a Google Drive or Notion integration to save reports automatically

🎯 Ideal for:
✔️ Researchers & Analysts - automate background research
✔️ Journalists - quickly gather reliable sources
✔️ Developers - learn how to integrate multiple AI APIs into n8n
✔️ Students - speed up literature reviews

🔗 Completely free and open-source! 🚀
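Outside of n8n, the SerpAPI → Jina step of the loop roughly corresponds to this TypeScript sketch. The endpoints follow the public SerpAPI and Jina Reader docs; the four-result cutoff and the env-var names are illustrative:

```typescript
// Minimal sketch of the search-and-scrape step, assuming the standard SerpAPI
// and Jina Reader endpoints. SERPAPI_KEY and JINA_KEY are placeholder env vars.
const SERPAPI_KEY = process.env.SERPAPI_KEY!;
const JINA_KEY = process.env.JINA_KEY!;

async function searchAndScrape(query: string): Promise<string[]> {
  // 1. Fetch top Google results for one generated query via SerpAPI.
  const search = await fetch(
    `https://serpapi.com/search.json?q=${encodeURIComponent(query)}&api_key=${SERPAPI_KEY}`
  ).then((r) => r.json());

  const urls: string[] = (search.organic_results ?? [])
    .slice(0, 4) // keep only the top results, mirroring the workflow's loop
    .map((r: { link: string }) => r.link);

  // 2. Extract readable text from each URL with the Jina Reader API.
  return Promise.all(
    urls.map((url) =>
      fetch(`https://r.jina.ai/${url}`, {
        headers: { Authorization: `Bearer ${JINA_KEY}` },
      }).then((r) => r.text())
    )
  );
}
```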
by Sira Ekabut
Facebook access tokens expire quickly, requiring regular updates for continued API access. This workflow simplifies the process of exchanging short-lived tokens for long-lived ones, saving time and reducing manual effort.

What this workflow does
- Exchanges a short-lived Facebook User Access Token for a long-lived token using the Facebook Graph API.
- Optionally retrieves a long-lived Page Access Token associated with the user.
- Outputs both the user and page tokens for further use in automation or integrations.

Setup
Prerequisites:
- A valid Facebook App ID and App Secret.
- A short-lived User Access Token from the Facebook platform.
- (Optional) The App-Scoped User ID for fetching associated page tokens.

Workflow configuration:
- Replace the placeholder values in the "Set Parameter" node with your Facebook credentials and token.
- Run the workflow manually to generate long-lived tokens.

Documentation reference:
Follow the official Facebook guide for more details: https://developers.facebook.com/docs/facebook-login/guides/access-tokens/get-long-lived/
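For reference, here is a TypeScript sketch of the two Graph API calls the workflow's HTTP Request nodes make, per the linked Facebook guide. The Graph API version and the placeholder constants are assumptions:

```typescript
// Sketch of the token exchange described in the linked Facebook guide.
// Replace the placeholders with your own app credentials and token.
const APP_ID = "YOUR_APP_ID";
const APP_SECRET = "YOUR_APP_SECRET";
const SHORT_LIVED_TOKEN = "YOUR_SHORT_LIVED_TOKEN";

async function getLongLivedUserToken(): Promise<string> {
  const params = new URLSearchParams({
    grant_type: "fb_exchange_token",
    client_id: APP_ID,
    client_secret: APP_SECRET,
    fb_exchange_token: SHORT_LIVED_TOKEN,
  });
  const res = await fetch(
    `https://graph.facebook.com/v19.0/oauth/access_token?${params}`
  );
  const data = await res.json();
  return data.access_token; // long-lived user token (roughly 60 days)
}

// Optional second call: list the user's pages, each of which includes a
// long-lived page access_token when queried with a long-lived user token.
async function getPageTokens(appScopedUserId: string, userToken: string) {
  const res = await fetch(
    `https://graph.facebook.com/v19.0/${appScopedUserId}/accounts?access_token=${userToken}`
  );
  return (await res.json()).data;
}
```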
by Mohan Gopal
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🤖 AI-Powered Document QA System using Webhook, Pinecone + OpenAI + n8n
This project demonstrates how to build a Retrieval-Augmented Generation (RAG) system using n8n, with a simple question-answering interface exposed via Webhook and connected to a user interface (created using Lovable):

🧾 Downloads PDF documents from Google Drive (contract documents, user manuals, HR policy documents, etc.)
📚 Converts them into vector embeddings using OpenAI
🔍 Stores and searches them in the Pinecone vector DB
💬 Allows natural-language querying of contracts using AI Agents

📂 Flow 1: Document Loading & RAG Setup
This flow automates:
- Reading documents from a Google Drive folder
- Vectorizing them using text-embedding-3-small
- Uploading the vectors into Pinecone for later semantic search
(A sketch of the chunk→embed→upsert pipeline appears at the end of this entry.)

🧱 Workflow Structure
A[Manual Trigger] --> B[Google Drive Search]
B --> C[Google Drive Download]
C --> D[Pinecone Vector Store]
D --> E[Default Data Loader]
E --> F[Recursive Character Text Splitter]
E --> G[OpenAI Embedding]

🪜 Steps
- Manual Trigger: kickstarts the workflow on demand for loading new documents.
- Google Drive Search & Download: searches the file/folder and downloads the PDF documents.
- Recursive Character Text Splitter: breaks long documents into overlapping chunks. Settings: chunk size 1000, chunk overlap 100.
- OpenAI Embedding: model text-embedding-3-small, used to create the document vectors.
- Pinecone Vector Store: host URL, index name, batch size 200. Pinecone settings: type Dense, region us-east-1, mode Insert Documents.

💬 Flow 2: Chat-Based Q&A Agent
This flow enables chat-style querying of the stored documents using OpenAI-powered agents with vector memory.

🧱 Workflow Diagram
A[Webhook (chat message)] --> B[AI Agent]
B --> C[OpenAI Chat Model]
B --> D[Simple Memory]
B --> E[Answer with Vector Store]
E --> F[Pinecone Vector Store]
F --> G[Embeddings OpenAI]

🪜 Components
- Chat (Trigger): receives incoming chat queries.
- AI Agent node: handles the query flow using: Chat Model: OpenAI GPT; Memory: Simple Memory; Tool: Question Answer with Vector Store.
- Pinecone Vector Store: connected via the same embedding index as Flow 1.
- Embeddings: ensure document chunks are retrievable using vector similarity.
- Response node: returns the final AI response to the user via the webhook.

🌐 Flow 3: UI-Based Query with Lovable
This flow uses a web UI built with Lovable to query contracts directly from a form interface.

📥 Webhook Setup for Lovable
- Webhook node - Method: POST, URL: url
- Response: using the 'Respond to Webhook' node

🧱 Workflow Logic
A[Webhook (Lovable Form)] --> B[AI Agent]
B --> C[OpenAI Chat Model]
B --> D[Simple Memory]
B --> E[Answer with Vector Store]
E --> F[Pinecone Vector Store]
F --> G[Embeddings OpenAI]
B --> H[Respond to Webhook]

💡 Lovable UI
Users can submit:
- Full Name
- Email
- Department
- Freeform Query: the user can enter any freeform query.
The data is sent via the webhook to n8n, which responds with an answer drawn from the contract content.

🔍 Use Cases
- Contract querying for Legal/HR teams
- Procurement & vendor agreement QA
- Customer support automation (based on terms)
- RAG systems for private document knowledge

⚙️ Tools & Tech Stack

📌 Final Notes
- Pinecone index: package1536, dimension: 1536
- Chunk size: 1000, overlap: 100
- Embedding model: text-embedding-3-small
Feel free to fork the workflow or request the full JSON export. Looking forward to your suggestions and improvements!
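As promised above, here is a TypeScript sketch of Flow 1's chunk→embed→upsert pipeline outside n8n. The request shapes follow the public OpenAI and Pinecone REST docs; the splitter is a simplified fixed-size stand-in for the Recursive Character Text Splitter, and PINECONE_HOST is a placeholder:

```typescript
const OPENAI_KEY = process.env.OPENAI_API_KEY!;
const PINECONE_KEY = process.env.PINECONE_API_KEY!;
const PINECONE_HOST = "https://package1536-xxxx.svc.us-east-1.pinecone.io"; // placeholder index host

// Simple fixed-size splitter with overlap (chunk 1000, overlap 100); a rough
// stand-in for n8n's Recursive Character Text Splitter.
function splitText(text: string, size = 1000, overlap = 100): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

async function embedAndUpsert(docId: string, text: string): Promise<void> {
  const chunks = splitText(text);

  // Embed all chunks in one request with text-embedding-3-small (1536 dims).
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${OPENAI_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: chunks }),
  });
  const { data } = await res.json();

  // Upsert the dense vectors into the Pinecone index.
  await fetch(`${PINECONE_HOST}/vectors/upsert`, {
    method: "POST",
    headers: { "Api-Key": PINECONE_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({
      vectors: data.map((d: { embedding: number[] }, i: number) => ({
        id: `${docId}-${i}`,
        values: d.embedding,
        metadata: { text: chunks[i] },
      })),
    }),
  });
}
```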
by Mario
Purpose
This workflow allows you to import any workflow from a file or another n8n instance and map the credentials easily.

How it works
- A multi-step form setup guides you through the entire process.
- At the beginning you have two options: upload a workflow file (JSON), or copy a workflow from a remote n8n instance.
- If you choose the second option, you first pick one of your predefined remote instances (defined in the Settings node); the workflow then retrieves a list of all workflows on that instance via the n8n API, from which you choose one.
- Both initial options now come together: the workflow file is processed.
- In parallel, all credentials of the current instance are retrieved using the Execute Command node.
- The next form page enables a mapping of all the credentials used in the workflow. Matching happens between the names of the original credentials and those available on the current instance (names, because one workflow can contain different credentials of the same type). Each option then lists all available credentials of the same type, and the user can always choose to create a new credential on the fly.
- For every option set to create a new credential, an empty credential is created on the current instance using the n8n API. An emoji is appended to its name to indicate that it still needs to be populated.
- Finally the workflow is updated with the new credential IDs and created on the current instance using the n8n API. The user then gets a message indicating whether the process succeeded. (A sketch of the remapping step follows below.)

Setup
- Select your credentials in the nodes which require them.
- Configure your remote instance(s) in the Settings node. (You can skip this step if you only want to use the file-upload feature.) Each instance is defined as an object with the keys name, apiKey and baseUrl; the instances are wrapped inside an array. You can find an example described within a note on the workflow canvas.

How to use
- Grab the (production) URL of the form from the first node.
- Open the URL and follow the instructions given in the multi-step form.

Disclaimer
Security: be aware that all credentials are decrypted and processed within the workflow, and that the API keys to other n8n instances are stored within the workflow. This solution is primarily meant for transferring data between testing environments. For production use, consider the n8n Enterprise edition, which provides a reliable way to deploy workflows between environments without manual credential mapping.
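The credential-remapping step can be pictured with this TypeScript sketch: walk an exported n8n workflow JSON and swap credential references according to a name-based mapping. The node/credential shapes follow n8n's workflow export format; the mapping key format is illustrative:

```typescript
interface CredentialRef { id: string; name: string }
interface WorkflowNode {
  name: string;
  type: string;
  credentials?: Record<string, CredentialRef>; // keyed by credential type
}
interface Workflow { name: string; nodes: WorkflowNode[] }

// mapping: "<credential type>:<original name>" -> credential on the target instance
function remapCredentials(
  workflow: Workflow,
  mapping: Map<string, CredentialRef>
): Workflow {
  for (const node of workflow.nodes) {
    if (!node.credentials) continue;
    for (const [type, ref] of Object.entries(node.credentials)) {
      const target = mapping.get(`${type}:${ref.name}`);
      // Replace the reference only when the form produced a match;
      // unmatched entries are left for the "create new credential" path.
      if (target) node.credentials[type] = { ...target };
    }
  }
  return workflow;
}
```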
by David Ashby
Complete MCP server exposing all Oura Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: serves as your server endpoint for AI agent requests
• Tool Nodes: pre-configured for every Oura Tool operation
• AI Expressions: automatically populate parameters via $fromAI() placeholders
• Native Integration: uses the official n8n Oura Tool node with full error handling

📋 Available Operations (4 total)
Every possible Oura Tool operation is included:
🔧 Profile (1 operation)
• Get a profile
🔧 Summary (3 operations)
• Get activity summary
• Get readiness summary
• Get sleep summary

🤖 AI Integration
Parameter handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response format: native Oura API responses with full data structure
Error handling: built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: add the MCP server URL to its configuration
• Custom AI apps: use the MCP URL as a tool endpoint
• Other n8n workflows: call the MCP tools from any workflow
• API integration: direct HTTP calls to the MCP endpoints

✨ Benefits
• Complete coverage: every Oura Tool operation available
• Zero setup: no parameter mapping or configuration needed
• AI-ready: built-in $fromAI() expressions for all parameters
• Production-ready: native n8n error handling and logging
• Extensible: easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
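For the "direct HTTP calls" option, a rough TypeScript sketch follows, assuming the MCP trigger accepts MCP's JSON-RPC 2.0 message format at the webhook URL. Both MCP_URL and the tool name are placeholders; use the actual URL from your MCP trigger node and the tool names the server advertises via tools/list:

```typescript
const MCP_URL = "https://your-n8n-instance/webhook/mcp"; // placeholder

async function getSleepSummary() {
  const res = await fetch(MCP_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call", // standard MCP tool-invocation method
      params: {
        name: "get_sleep_summary", // illustrative tool name
        arguments: { start: "2024-01-01", end: "2024-01-07" },
      },
    }),
  });
  return res.json(); // JSON-RPC response wrapping the Oura API data
}
```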
by Octoleo
Overview
This workflow automates the backup of all workflows from your n8n instance to a Git repository hosted on Gitea. It runs on a scheduled trigger, fetching, encoding, and committing workflow data, ensuring seamless version control and disaster recovery.

📌 Quick setup: just update three global variables and configure authentication - no manual exports needed!

How It Works (Quick Glance)
1️⃣ Scheduled Execution → runs automatically at defined intervals.
2️⃣ Fetch Workflows → uses the n8n API to retrieve all workflows.
3️⃣ Process Workflows → converts workflow data into a Git-friendly format.
4️⃣ Commit & Push to Git → saves workflows in a Gitea repository (see the sketch below).

Setup Steps (⚡ Takes ~5 min)
1️⃣ Set global variables
Go to the Globals section in the workflow and update:
- repo.url → https://your-gitea-instance.com (replace with your actual Gitea URL)
- repo.name → workflows (repository name where backups will be stored)
- repo.owner → octoleo (Gitea account that owns the repository)
📌 These three variables define where the workflows are stored.

2️⃣ Configure Gitea authentication
- Go to your Gitea account → generate a Personal Access Token.
- In the credential manager, create a new Gitea token with:
  - Name: Authorization
  - Value: Bearer YOUR_PERSONAL_ACCESS_TOKEN
📌 Ensure there is a space after "Bearer" before the token!

3️⃣ Link credentials to the Git nodes
Attach the Gitea credentials to these three nodes:
- GetGitea → retrieves existing repository data
- PutGitea → updates workflows
- PostGitea → adds new workflows

4️⃣ Link credentials for API requests
Add API authentication in the node that fetches all workflows.

5️⃣ Test & activate
- Run the workflow manually to confirm backups work.
- Enable the schedule trigger for automation.
📌 The workflow automatically checks for changes before committing updates.

Why Use This Workflow?
✅ Automated backups → no manual exports needed.
✅ Version control → easily track workflow changes.
✅ Simple setup → just configure globals & credentials.
✅ Secure → uses token-based authentication.

Next Steps
💬 Have questions? Reach out on the forum! 🚀
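The fetch→encode→commit loop corresponds roughly to this TypeScript sketch, using the n8n public API and Gitea's contents endpoint. Base URLs and token variables are placeholders, and the per-file naming scheme is illustrative:

```typescript
const N8N_URL = "https://your-n8n-instance.com";
const N8N_KEY = process.env.N8N_API_KEY!;
const GITEA_URL = "https://your-gitea-instance.com";
const GITEA_TOKEN = process.env.GITEA_TOKEN!;
const OWNER = "octoleo";
const REPO = "workflows";

async function backupWorkflows(): Promise<void> {
  // 1. Fetch all workflows from the n8n API.
  const { data: workflows } = await fetch(`${N8N_URL}/api/v1/workflows`, {
    headers: { "X-N8N-API-KEY": N8N_KEY },
  }).then((r) => r.json());

  for (const wf of workflows) {
    // 2. Encode each workflow as base64, as the contents API expects.
    const path = `${wf.name}.json`;
    const content = Buffer.from(JSON.stringify(wf, null, 2)).toString("base64");
    const fileUrl = `${GITEA_URL}/api/v1/repos/${OWNER}/${REPO}/contents/${encodeURIComponent(path)}`;
    const headers = {
      Authorization: `Bearer ${GITEA_TOKEN}`,
      "Content-Type": "application/json",
    };

    // 3. Check whether the file already exists (updates need its sha).
    const existing = await fetch(fileUrl, { headers });
    const sha = existing.ok ? (await existing.json()).sha : undefined;

    // 4. Create (POST) or update (PUT) the file in the repository.
    await fetch(fileUrl, {
      method: sha ? "PUT" : "POST",
      headers,
      body: JSON.stringify({ message: `Backup ${wf.name}`, content, sha }),
    });
  }
}
```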
by Jimleuk
This n8n workflow demonstrates how to create a recipe-recommendation chatbot using the Qdrant vector store's Recommendation API. Use this example to build recommendation features into your AI Agents for your users.

How it works
- For our recipes, we use HelloFresh's weekly courses and recipes as data, scraped from their website.
- Each recipe is split, vectorised, and inserted into a Qdrant collection using Mistral embeddings. Additionally, the whole recipe is stored in a SQLite database for later retrieval.
- Our AI Agent is set up to recommend recipes from our Qdrant vector store. However, instead of the default similarity search, it uses the Recommendation API (see the sketch below).
- Qdrant's Recommendation API allows you to provide a negative prompt; in our case, the user can specify recipes or ingredients to avoid.
- The AI Agent can thus suggest recipes better suited to the user, increasing customer satisfaction.

Requirements
- A Qdrant vector store instance to save the recipes
- A Mistral.ai account for embeddings and the LLM agent

Customising the workflow
This workflow can work for a variety of different audiences. Try different datasets such as clothes, sports shoes, vehicles, or even holidays.
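A minimal TypeScript sketch of the recommendation call behind the agent's tool is shown below. QDRANT_URL and the collection name are placeholders; recent Qdrant versions accept raw vectors as positive/negative examples, while older ones expect point IDs:

```typescript
const QDRANT_URL = "http://localhost:6333"; // placeholder
const COLLECTION = "recipes";

// Embed a text with Mistral's embeddings endpoint (model "mistral-embed").
async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.mistral.ai/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MISTRAL_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "mistral-embed", input: [text] }),
  });
  return (await res.json()).data[0].embedding;
}

async function recommendRecipes(wanted: string, avoided: string) {
  const [positive, negative] = await Promise.all([embed(wanted), embed(avoided)]);

  const res = await fetch(
    `${QDRANT_URL}/collections/${COLLECTION}/points/recommend`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        positive: [positive], // steer towards the requested flavours
        negative: [negative], // steer away from disliked ingredients
        limit: 3,
        with_payload: true,
      }),
    }
  );
  return (await res.json()).result;
}
```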
by Geoffrey Saxena
👤 Who is this for?
This workflow is great for n8n users who want to prevent duplicate or overlapping workflow runs. If you're a developer, DevOps engineer, or automation enthusiast managing tasks like database updates, syncing tools, or hitting rate-limited APIs, this one's for you.

🧩 What problem does this solve?
In the real world, automations can get triggered at the same time, whether because of multiple webhook calls, overlapping schedules, or retries. When two workflows try to do the same thing at once (like updating a record or syncing data), the result can be conflicts, data corruption, or wasted API calls.
This workflow avoids that problem by using Redis as a lock system, so only one instance runs at a time. Think of it like putting up a "🚧 Workflow in Progress" sign while your logic is running.

⚙️ What this workflow does
- When the workflow starts, it tries to set a Redis key as a lock with a short expiry.
- If the lock is free: your main business logic runs, and the lock is cleared once it's done.
- If the lock is already taken (i.e., another run is in progress): the workflow waits and retries a few times.
- If a duplicate request shows up while one is already being processed: it is skipped to avoid unnecessary work.
You can customize both the timeout and the retry logic to match your needs. (A sketch of the locking pattern follows this section.)

🛠️ Setup guide
To use this template:
- You'll need access to a Redis instance (either self-hosted or managed, e.g. Upstash or Redis Cloud).
- Set up your Redis credentials in the n8n Redis node.
- Swap out the webhook node with your actual trigger or logic.
- Adjust the lock timeout to match how long your task typically takes.

> 💡 Bonus tip: use this pattern wherever you need idempotency or want to avoid duplicate processing.

🧪 Example use case
Let's say you have a workflow that syncs ClickUp tickets to Google Sheets. It runs daily at 9 AM, updates tickets, adds notes, and makes sure nothing is missed. But what if two runs start at the same time? Or someone triggers a manual sync while the scheduled one is still working?
By wrapping the whole sync inside this Redis locking template, you make sure it only runs one at a time, saving your APIs (and your sanity).
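Here is a minimal TypeScript sketch of the same locking pattern outside n8n, using the node-redis client. The key name, TTL, and retry counts are illustrative, and the release step is deliberately simplified (a production version would use a Lua script to make check-and-delete atomic):

```typescript
import { createClient } from "redis";
import { randomUUID } from "crypto";

const LOCK_KEY = "lock:clickup-sync"; // one key per protected workflow
const LOCK_TTL = 300; // seconds; should exceed the task's typical runtime
const MAX_RETRIES = 5;

async function runWithLock(task: () => Promise<void>): Promise<void> {
  const client = createClient();
  await client.connect();
  const token = randomUUID(); // lets us release only our own lock

  try {
    for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
      // SET ... NX EX: acquire the lock only if the key does not exist yet.
      const acquired = await client.set(LOCK_KEY, token, { NX: true, EX: LOCK_TTL });
      if (acquired === "OK") {
        try {
          await task(); // the protected business logic
        } finally {
          // Release only if we still hold the lock (it may have expired).
          if ((await client.get(LOCK_KEY)) === token) await client.del(LOCK_KEY);
        }
        return;
      }
      // Lock held by another run: wait, then retry.
      await new Promise((r) => setTimeout(r, 10_000));
    }
    console.log("Skipped: another run is still in progress.");
  } finally {
    await client.quit();
  }
}
```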
by Jimleuk
This n8n template showcases a cool feature of n8n Forms: the form itself can be defined dynamically using the form-fields schema.
It may be debatable how useful this template actually is, since both Airtable and Baserow already provide form interfaces, but it's still a great exercise and demonstration if the use case ever comes around.

How it works
- A form trigger is used to dynamically select a database/table from which to build the n8n form.
- The table's schema is imported into the workflow and, using the Code node, converted into the n8n form-fields schema. This lets us dynamically build the fields in our n8n form when we choose to define the form using the JSON option. (A sketch of this conversion follows below.)
- Once the n8n form is submitted, the values are converted back into our table's API schema so that a new row can be created. Note that any file/attachment fields are removed, as they need to be handled separately.
- Files are processed separately as they may first need to be stored. Once complete, the reference is saved into the newly created row.

Check out the example Airtable here: https://airtable.com/appfP15Xd0aVZR9xV/shrGFgXLyQ4Jg58SU

How to use
The n8n form is autogenerated, which means you only need to provide access to the table. Using this approach, the template can be reused for any number of Airtable and/or Baserow tables.

Requirements
- An Airtable account or a Baserow account.
- An n8n instance accessible to your users.

Customising this workflow
Not using either Airtable or Baserow? Theoretically, any datastore which provides a fields schema can be used with this template. If you're feeling creative, split the table into multiple forms for a better user experience.
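The Code-node conversion can be sketched like this in TypeScript. The exact n8n form-field keys (fieldLabel, fieldType, fieldOptions) and the Airtable type mapping below are assumptions to be checked against the schema your n8n version expects:

```typescript
interface AirtableField {
  name: string;
  type: string;
  options?: { choices?: { name: string }[] };
}

// Assumed mapping from Airtable field types to n8n form field types.
const typeMap: Record<string, string> = {
  singleLineText: "text",
  multilineText: "textarea",
  email: "email",
  number: "number",
  date: "date",
};

function toFormFields(fields: AirtableField[]) {
  return {
    formFields: fields
      .filter((f) => f.type !== "multipleAttachments") // files handled separately
      .map((f) =>
        f.type === "singleSelect"
          ? {
              fieldLabel: f.name,
              fieldType: "dropdown",
              fieldOptions: {
                values: (f.options?.choices ?? []).map((c) => ({ option: c.name })),
              },
            }
          : { fieldLabel: f.name, fieldType: typeMap[f.type] ?? "text" }
      ),
  };
}
```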
by Ferenc Erb
Use Case
Extend Bitrix24 tasks with custom widgets that display relevant task information and enable seamless interaction through a custom tab interface.

What This Workflow Does
- Processes incoming webhook requests from Bitrix24 task interfaces
- Handles authentication and secure token validation
- Manages application installation and placement registration
- Displays task data in a custom formatted view
- Stores and retrieves configuration settings persistently
- Provides user-friendly HTML interfaces for task information

Setup Instructions
- Configure the Bitrix24 webhook endpoints for the task widget
- Set up the authentication credentials in your Bitrix24 account
- Install the application and register the task-view tab placement
- Customize the task data display format as needed
- Deploy and test the application functionality within Bitrix24 tasks
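A hedged TypeScript sketch of the handler's core follows: read the placement payload Bitrix24 POSTs to the widget URL, validate the auth token by using it against a REST method, and render task data as HTML. The field and method names follow Bitrix24's REST conventions but should be verified for your app:

```typescript
interface PlacementPayload {
  AUTH_ID: string; // short-lived OAuth token passed by Bitrix24
  DOMAIN: string; // e.g. yourcompany.bitrix24.com
  PLACEMENT_OPTIONS: string; // JSON string, e.g. {"taskId":"123"}
}

async function handlePlacementRequest(body: PlacementPayload): Promise<string> {
  const { taskId } = JSON.parse(body.PLACEMENT_OPTIONS);

  // Using the token against the REST API doubles as validation: an invalid
  // or expired AUTH_ID yields an error response instead of task data.
  const res = await fetch(
    `https://${body.DOMAIN}/rest/tasks.task.get?taskId=${taskId}&auth=${body.AUTH_ID}`
  );
  const data = await res.json();
  if (data.error) throw new Error(`Token rejected: ${data.error_description}`);

  const task = data.result.task;
  // Minimal HTML view returned into the custom tab iframe.
  return `<html><body>
    <h2>${task.title}</h2>
    <p>Status: ${task.status} | Deadline: ${task.deadline ?? "none"}</p>
  </body></html>`;
}
```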
by Artem Boiko
Revit to HTML Quantity Takeoff Generator
Automates the extraction of wall quantities from Revit models and creates a professional, interactive HTML report.

Key Features
- Automated wall-quantity analysis
- Calculates volumes by wall type ("Type Name")
- Generates an interactive HTML QTO report
- Includes summary statistics: total elements, total and average volumes
- Provides a detailed breakdown by element type

How it works
- Upload a Revit file as input.
- The workflow extracts wall quantities and types (see the aggregation sketch below).
- It creates and saves a ready-to-share HTML dashboard with the QTO data.
- No API keys required; runs offline.
- Output is a professional, ready-to-use HTML report.
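For illustration, the aggregation behind the QTO report amounts to grouping extracted wall elements by their "Type Name" parameter and computing the summary statistics shown on the dashboard. The input shape here is an assumption:

```typescript
interface WallElement {
  typeName: string; // Revit "Type Name" parameter
  volume: number; // e.g. cubic meters
}

function buildQto(walls: WallElement[]) {
  // Group walls by type, accumulating counts and volumes per type.
  const byType = new Map<string, { count: number; volume: number }>();
  for (const w of walls) {
    const row = byType.get(w.typeName) ?? { count: 0, volume: 0 };
    row.count += 1;
    row.volume += w.volume;
    byType.set(w.typeName, row);
  }

  // Summary statistics for the report header.
  const totalVolume = walls.reduce((sum, w) => sum + w.volume, 0);
  return {
    totalElements: walls.length,
    totalVolume,
    averageVolume: walls.length ? totalVolume / walls.length : 0,
    breakdown: [...byType.entries()].map(([typeName, r]) => ({ typeName, ...r })),
  };
}
```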
by Ranjan Dailata
Who this is for
The Async Structured Bulk Data Extract with Bright Data Web Scraper workflow is designed for data engineers, market researchers, competitive-intelligence teams, and automation developers who need to programmatically collect and structure high-volume web data using Bright Data's dataset and snapshot capabilities.

This workflow is built for:
- Data engineers - building large-scale ETL pipelines from web sources
- Market researchers - collecting bulk data for analysis across competitors or products
- Growth hackers & analysts - mining structured datasets for insights
- Automation developers - needing reliable snapshot-triggered scrapers
- Product managers - overseeing data-backed decision-making using live web information

What problem is this workflow solving?
Web scraping at scale often requires asynchronous operations, including waiting for data preparation and snapshots to complete. Handling this process manually can lead to timeouts, errors, or inconsistent results. This workflow automates the entire process of submitting a scraping request, waiting for the snapshot, retrieving the data, and notifying downstream systems, all in a structured, repeatable fashion.

It solves:
- Asynchronous snapshot-completion handling
- Reliable retrieval of large datasets using Bright Data
- Automated delivery of scraped results via webhook
- Disk persistence for traceability and historical analysis

What this workflow does
- Set Bright Data Dataset ID & Request URL: takes in the dataset ID and the Bright Data API endpoint used to trigger the scrape job.
- HTTP Request: sends an authenticated request to the Bright Data API to start a scraping snapshot job.
- Wait Until Snapshot is Ready: polls the snapshot status (e.g., every 30 seconds) until it reaches the ready state (see the polling sketch below).
- Download Snapshot: downloads the structured dataset snapshot once ready.
- Persist Response to Disk: saves the dataset to disk for archival, review, or local processing.
- Webhook Notification: sends the final result, or a summary of it, to an external webhook.

Setup
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
- Update the Set Dataset Id and Request URL nodes to set the brand content URL.
- Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs
- Polling strategy: adjust the polling interval (e.g., every 15-60 seconds) based on snapshot complexity.
- Input flexibility: accept the datasetId and request URL dynamically from a webhook trigger or an input form.
- Webhook output: send notifications to internal APIs (for use in dashboards) or to Zapier/Make (for multi-step automation).
- Persistence: save output to remote FTP or SFTP storage, Amazon S3, Google Cloud Storage, etc.
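The trigger→poll→download sequence can be sketched in TypeScript as below. The dataset API paths follow Bright Data's v3 dataset endpoints but should be verified against the current docs; BRIGHTDATA_TOKEN and DATASET_ID are placeholders:

```typescript
import { writeFileSync } from "fs";

const TOKEN = process.env.BRIGHTDATA_TOKEN!;
const DATASET_ID = "gd_xxxxxxxx"; // placeholder dataset ID
const API = "https://api.brightdata.com/datasets/v3";
const headers = { Authorization: `Bearer ${TOKEN}`, "Content-Type": "application/json" };

async function scrape(urls: string[]): Promise<void> {
  // 1. Trigger the snapshot job for the requested URLs.
  const { snapshot_id } = await fetch(`${API}/trigger?dataset_id=${DATASET_ID}`, {
    method: "POST",
    headers,
    body: JSON.stringify(urls.map((url) => ({ url }))),
  }).then((r) => r.json());

  // 2. Poll every 30 seconds until the snapshot reaches the ready state.
  while (true) {
    const { status } = await fetch(`${API}/progress/${snapshot_id}`, { headers })
      .then((r) => r.json());
    if (status === "ready") break;
    if (status === "failed") throw new Error("Snapshot failed");
    await new Promise((r) => setTimeout(r, 30_000));
  }

  // 3. Download the finished snapshot and persist it to disk.
  const data = await fetch(`${API}/snapshot/${snapshot_id}?format=json`, { headers })
    .then((r) => r.text());
  writeFileSync(`snapshot-${snapshot_id}.json`, data);
}
```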