by Obsidi8n
**How it works:**
- Send notes from Obsidian via Webhook to start the audio conversion
- OpenAI converts your text to natural-sounding audio and generates episode descriptions
- Audio files are stored in Cloudinary and automatically attached to your notes in Obsidian
- A professional podcast feed is generated, compatible with all major podcast platforms (Apple, Spotify, Google)

**Set up steps:**
1. Install and configure the Post Webhook Plugin in Obsidian
2. Set up Custom Auth credentials in n8n for Cloudinary using the following JSON:

```json
{
  "name": "Cloudinary API",
  "type": "httpHeaderAuth",
  "authParameter": {
    "type": "header",
    "key": "Authorization",
    "value": "Basic {{Buffer.from('your_api_key:your_api_secret').toString('base64')}}"
  }
}
```

3. Configure podcast feed metadata (title, author, cover image, etc.)

**Note:** The second flow is a generic Podcast Feed module that can be reused in any '[...]-to-Podcast' workflow. It generates a standard RSS feed from Google Sheets data and podcast metadata, making it compatible with all major podcast platforms.
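As a reference for that reusable feed module, here is a minimal sketch of how one Google Sheets row could be rendered as a podcast RSS `<item>`. The field names (title, description, audioUrl, pubDate) are illustrative assumptions, not the workflow's exact column names:

```typescript
// Minimal sketch: turning one spreadsheet row into an RSS <item>.
// Field names are assumptions for illustration; match them to your
// actual sheet columns.
interface Episode {
  title: string;
  description: string;
  audioUrl: string; // Cloudinary URL of the generated audio
  pubDate: string;  // RFC 822 date, e.g. "Mon, 06 Jan 2025 10:00:00 GMT"
}

function episodeToRssItem(ep: Episode): string {
  return `
  <item>
    <title>${ep.title}</title>
    <description><![CDATA[${ep.description}]]></description>
    <enclosure url="${ep.audioUrl}" type="audio/mpeg" />
    <pubDate>${ep.pubDate}</pubDate>
    <guid isPermaLink="false">${ep.audioUrl}</guid>
  </item>`;
}

console.log(episodeToRssItem({
  title: "My First Note",
  description: "Auto-generated episode description",
  audioUrl: "https://res.cloudinary.com/demo/video/upload/episode1.mp3",
  pubDate: "Mon, 06 Jan 2025 10:00:00 GMT",
}));
```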
by Ranjan Dailata
**Who is this for?**
- **Data Analysts and Researchers** who need to gather and summarize information from Bing search results efficiently.
- **Developers and Engineers** looking to integrate Bing search data into applications or services.
- **Digital Marketers and SEO Specialists** interested in monitoring search engine results for specific keywords or topics.

**What problem is this workflow solving?**
Manually extracting and summarizing information from search engine results is time-consuming and error-prone. This workflow automates querying Bing's Copilot Search, extracting structured data from the results, summarizing the information, and sending a notification via webhook. It leverages Bing Copilot search results (retrieved through Bright Data) and integrates AI-powered tools for data extraction and summarization.

**What this workflow does**
- Performs Bing searches using Bright Data's Bing Search API.
- Extracts structured data from the search results.
- Summarizes the extracted information using AI tools.
- Sends the summarized data to a specified endpoint via webhook.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access it through Vertex AI or a proxy).
5. Update the Perform a Bing Copilot Request node with the prompt for the search you wish to perform.
6. Update the Structured Data Webhook Notifier node with the webhook endpoint of your choice.
7. Update the Summary Webhook Notifier node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**
- **Modify Search Queries:** Adjust the search terms to target different topics or keywords.
- **Change Data Extraction Logic:** Customize the extraction process to capture specific data points from the search results.
- **Alter Summarization Techniques:** Integrate different AI models or adjust parameters to change how summaries are generated.
- **Update Webhook Endpoints:** Direct the summarized data to different endpoints as required.
- **Schedule Workflow Runs:** Set up automated triggers to run the workflow at desired intervals.
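For orientation, here is a rough sketch of the kind of request the Bright Data search step issues. The endpoint shape, zone name, and token handling are assumptions about a typical Web Unlocker setup, not this workflow's exact node configuration:

```typescript
// Minimal sketch of a Web Unlocker request fetching Bing results.
// The zone name ("web_unlocker1") is a hypothetical placeholder; verify
// the endpoint and options against your Bright Data account.
const BRIGHT_DATA_TOKEN = process.env.BRIGHT_DATA_TOKEN!; // Web Unlocker token

async function bingSearch(query: string): Promise<string> {
  const res = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${BRIGHT_DATA_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone: "web_unlocker1", // hypothetical zone name
      url: `https://www.bing.com/search?q=${encodeURIComponent(query)}`,
      format: "raw",         // return the raw HTML for downstream extraction
    }),
  });
  if (!res.ok) throw new Error(`Bright Data request failed: ${res.status}`);
  return res.text();
}

bingSearch("n8n workflow automation").then((html) => console.log(html.length));
```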
by Jakkrapat Ampring
**Description**
This workflow automatically generates personalized certificates in Google Slides and emails them to respondents only if they meet a minimum score threshold, using data submitted via Google Forms (stored in Google Sheets).

Ideal for:
- Online courses
- Quizzes and workshops
- Event participation certificates

**Sheet Requirements**
Your connected Google Sheet (from the Google Form) must contain:
- **Full Name** – The name to appear on the certificate.
- **Email** – Recipient’s email address.
- **Score** – The test/quiz score used for threshold logic.

**Setup Instructions**
1. **Connect Google Sheets** – Make sure your Form responses are linked to a Sheet with the columns mentioned above.
2. **Set Score Threshold** – Modify the If node to your desired minimum score (e.g., `>= 80`).
3. **Customize Certificate Template** – Use a Google Slides file with text placeholders like `{{Full Name}}`.
4. **Connect Gmail & Google Drive** – For sending emails and saving generated certificates.
5. **Update File IDs** – Replace any placeholder Slide and Drive file IDs with your own.

**Services Used**
- Google Sheets (Form responses)
- Google Slides (Certificate template)
- Google Drive (Storage)
- Gmail (Email delivery)

**Troubleshooting**
- **"Cannot read property 'Score'"** → Ensure your column names match exactly (`Score`, `Full Name`, etc.).
- **Slides not replacing placeholders** → Double-check the placeholder format (`{{Full Name}}`) and capitalization.
- **Emails not sending** → Verify Gmail authentication and make sure the If node is correctly filtering results.
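Under the hood, placeholder replacement maps to the Slides API's replaceAllText request. A minimal sketch, assuming a raw HTTP call with a placeholder presentation ID and OAuth token (in the workflow itself, the n8n Google Slides node handles this for you):

```typescript
// Minimal sketch of the Slides API call that swaps {{Full Name}} for the
// respondent's name on a copy of the certificate template.
async function fillCertificate(
  presentationId: string,
  accessToken: string,
  fullName: string,
): Promise<void> {
  const res = await fetch(
    `https://slides.googleapis.com/v1/presentations/${presentationId}:batchUpdate`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        requests: [
          {
            replaceAllText: {
              containsText: { text: "{{Full Name}}", matchCase: true },
              replaceText: fullName,
            },
          },
        ],
      }),
    },
  );
  if (!res.ok) throw new Error(`Slides batchUpdate failed: ${res.status}`);
}
```

Note that matchCase is set to true, which is why the troubleshooting advice above stresses exact capitalization of the placeholder.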
by David Ashby
Complete MCP server exposing 2 Analytics API operations to AI agents.

⚡ **Quick Setup**
> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Analytics API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 **How it Works**
This workflow converts the Analytics API into an MCP-compatible interface for AI agents.
• **MCP Trigger:** Serves as your server endpoint for AI agent requests
• **HTTP Request Nodes:** Handle API calls to https://api.ebay.com{basePath}
• **AI Expressions:** Automatically populate parameters via $fromAI() placeholders
• **Native Integration:** Returns responses directly to the AI agent

📋 **Available Operations (2 total)**
🔧 Rate_Limit (1 endpoint)
• GET /rate_limit/: Retrieve Application Rate Limits
🔧 User_Rate_Limit (1 endpoint)
• GET /user_rate_limit/: Retrieve User Rate Limits

🤖 **AI Integration**
**Parameter Handling:** AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
**Response Format:** Native Analytics API responses with full data structure
**Error Handling:** Built-in n8n HTTP request error management

💡 **Usage Examples**
Connect this MCP server to any AI agent or workflow:
• **Claude Desktop:** Add the MCP server URL to your configuration
• **Cursor:** Add the MCP server SSE URL to your configuration
• **Custom AI Apps:** Use the MCP URL as a tool endpoint
• **API Integration:** Direct HTTP calls to MCP endpoints

✨ **Benefits**
• **Zero Setup:** No parameter mapping or configuration needed
• **AI-Ready:** Built-in $fromAI() expressions for all parameters
• **Production Ready:** Native n8n HTTP request handling and logging
• **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
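For illustration, a sketch of the raw eBay call one of the HTTP Request nodes wraps. The v1_beta base path is an assumption based on eBay's published Developer Analytics API, so verify it against your API reference; in the workflow, OAuth and parameters are supplied by n8n credentials and $fromAI():

```typescript
// Minimal sketch of the GET /rate_limit/ operation against eBay.
async function getRateLimits(accessToken: string): Promise<unknown> {
  const res = await fetch(
    "https://api.ebay.com/developer/analytics/v1_beta/rate_limit/",
    { headers: { Authorization: `Bearer ${accessToken}` } },
  );
  if (!res.ok) throw new Error(`eBay Analytics API error: ${res.status}`);
  return res.json(); // per-API call quotas and remaining request counts
}
```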
by David Ashby
🛠️ **MailerLite Tool MCP Server**

Complete MCP server exposing all MailerLite Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ **Quick Setup**
> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 **How it Works**
• **MCP Trigger:** Serves as your server endpoint for AI agent requests
• **Tool Nodes:** Pre-configured for every MailerLite Tool operation
• **AI Expressions:** Automatically populate parameters via $fromAI() placeholders
• **Native Integration:** Uses the official n8n MailerLite tool with full error handling

📋 **Available Operations (4 total)**
Every possible MailerLite Tool operation is included:
🔧 Subscriber (4 operations)
• Create a subscriber
• Get a subscriber
• Get many subscribers
• Update a subscriber

🤖 **AI Integration**
**Parameter Handling:** AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
**Response Format:** Native MailerLite API responses with full data structure
**Error Handling:** Built-in n8n error management and retry logic

💡 **Usage Examples**
Connect this MCP server to any AI agent or workflow:
• **Claude Desktop:** Add the MCP server URL to your configuration
• **Custom AI Apps:** Use the MCP URL as a tool endpoint
• **Other n8n Workflows:** Call MCP tools from any workflow
• **API Integration:** Direct HTTP calls to MCP endpoints

✨ **Benefits**
• **Complete Coverage:** Every MailerLite Tool operation available
• **Zero Setup:** No parameter mapping or configuration needed
• **AI-Ready:** Built-in $fromAI() expressions for all parameters
• **Production Ready:** Native n8n error handling and logging
• **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
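As a reference, a minimal sketch of the "Create a subscriber" operation against MailerLite's API. The endpoint and payload shape reflect MailerLite's current public API as I understand it; the token handling is a placeholder for what n8n's credential system does for you:

```typescript
// Minimal sketch of creating a MailerLite subscriber via raw HTTP.
async function createSubscriber(apiToken: string, email: string): Promise<unknown> {
  const res = await fetch("https://connect.mailerlite.com/api/subscribers", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
      Accept: "application/json",
    },
    // Example payload; "fields" values are illustrative
    body: JSON.stringify({ email, fields: { name: "Jane Doe" } }),
  });
  if (!res.ok) throw new Error(`MailerLite API error: ${res.status}`);
  return res.json();
}
```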
by simonscrapes
**Use Case**
Transform web pages into AI-friendly markdown format:
- You need to process webpage content for LLM analysis
- You want to extract both content and links from web pages
- You need clean, formatted text without HTML markup
- You want to respect API rate limits while crawling pages

**What this Workflow Does**
The workflow uses the Firecrawl.dev API to process webpages:
- Converts HTML content to markdown format
- Extracts all links from each webpage
- Handles API rate limiting automatically
- Processes URLs in batches from your database

**Setup**
1. Create a Firecrawl.dev account and get your API key
2. Add your Firecrawl API key to the HTTP Request node's Authorization header
3. Connect your URL database to the input node (the column name must be "Page"), or edit the array in the Example fields from data source node
4. Configure your preferred output database connection

**How to Adjust it to Your Needs**
- Modify the input source to pull URLs from different databases
- Adjust rate limiting parameters if needed
- Customize the output format for your specific use case

More templates and n8n workflows >>> @simonscrapes
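For context, a minimal sketch of the scrape call the HTTP Request node makes. The v1 endpoint and options reflect Firecrawl's documented API as I understand it; check their current docs before relying on the exact shape:

```typescript
// Minimal sketch of one Firecrawl scrape: markdown content plus links.
async function scrapeToMarkdown(apiKey: string, url: string): Promise<unknown> {
  const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      url,
      formats: ["markdown", "links"], // content as markdown plus extracted links
    }),
  });
  if (res.status === 429) throw new Error("Rate limited; retry later");
  if (!res.ok) throw new Error(`Firecrawl error: ${res.status}`);
  return res.json();
}
```

The 429 branch is the condition the workflow's rate-limit handling responds to automatically.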
by Agentick AI
This n8n workflow automates the process of collecting job and decision-maker data, crafting AI-generated referral messages, and drafting them in Gmail—all using a combination of Apify, Google Sheets, LLMs, and email APIs.

**Use cases**
- Auto-sourcing job postings from LinkedIn via Apify
- Identifying decision-makers at relevant companies
- Auto-drafting custom referral request messages using AI
- Exporting structured data to Google Sheets and drafting Gmail messages for outreach

**Good to know**
- You can customize the filtering logic to target specific cities or companies.
- Message creation uses the Gemini 2.0 Flash model and LangChain's output parser for structured messages (see the sketch after this section).
- Email data is fetched using Anymailfinder, but it can be replaced with other providers like Hunter.io.
- The Gmail API drafts the message, but you need to enable Gmail API access from your Google Cloud console.

**How it works**
1. **Trigger** – A Schedule Trigger runs the automation daily.
2. **Job Data Extraction** – Apify pulls job listings using a predefined actor. The HTTP response is split and structured using the Split Out node.
3. **Store Job Data** – Job listings are saved to a Google Sheet. The node maps key fields like title, company, location, and poster info.
4. **Decision-Maker Discovery** – Another Apify actor pulls decision-maker data from LinkedIn. This is split and filtered (e.g., by city or company name).
5. **Store Contacts** – Contact details (name, title, location, etc.) are appended to another Google Sheet (n8n-sheet).
6. **Message Generation** – An LLM Chain uses Gemini 2.0 Flash to generate short, custom LinkedIn messages. The message respects rules like tone, length (<100 words), and personalization.
7. **Parse & Merge AI Output** – The output is structured using the Structured Output Parser and merged with contact data.
8. **Save Final Messages** – The final headline and body are stored back into Google Sheets (n8n-sheet).
9. **Email Discovery** – The Get Email IDs node hits the Anymailfinder API using the LinkedIn profile link.
10. **Draft in Gmail** – Using the Gmail API, the message is drafted in your inbox with subject and body auto-filled.

**How to use**
- Update the Apify actor inputs to specify roles, companies, or locations.
- Replace the Schedule Trigger with a webhook or form input if desired.
- Update the Google Sheets document and sheet name in the relevant nodes.
- Add your Gmail and Anymailfinder credentials in n8n settings.

**Requirements**
- Google Sheets API access
- Gmail API access
- Apify account
- Gemini API key (via Google AI Studio)
- Anymailfinder (or an alternate email discovery API)

**Customizing this workflow**
This framework is highly modular. You can:
- Add more filters for company size, role, or hiring urgency
- Use alternate LLMs (OpenAI, Claude, etc.)
- Switch output channels (Slack, WhatsApp, etc.)
- Plug in different CRM tools for follow-ups
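To make the parse step concrete, here is a minimal sketch of the structured shape the output parser enforces and a validation pass over it. The field names mirror the headline/body stored in the sheet; the validation rules are illustrative, not the workflow's exact parser configuration:

```typescript
// Sketch of the structured message shape and a simple validation pass.
interface ReferralMessage {
  headline: string; // short subject-style opener
  body: string;     // personalized message, kept under ~100 words
}

function parseReferralMessage(raw: string): ReferralMessage {
  const parsed = JSON.parse(raw) as Partial<ReferralMessage>;
  if (typeof parsed.headline !== "string" || typeof parsed.body !== "string") {
    throw new Error("LLM output missing headline or body");
  }
  const wordCount = parsed.body.trim().split(/\s+/).length;
  if (wordCount > 100) {
    throw new Error(`Message too long: ${wordCount} words`);
  }
  return { headline: parsed.headline, body: parsed.body };
}

// Example: validating a model response before merging with contact data
console.log(parseReferralMessage(
  '{"headline":"Quick question about your team","body":"Hi Alex, I noticed..."}',
));
```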
by Leonard
**Open Deep Research - AI-Powered Autonomous Research Workflow**

**Description**
This workflow automates deep research by leveraging AI-driven search queries, web scraping, content analysis, and structured reporting. It enables autonomous research with iterative refinement, allowing users to collect, analyze, and summarize high-quality information efficiently.

**How it works**
1. 🔹 **User Input** – The user submits a research topic via a chat message.
2. 🧠 **AI Query Generation** – A Basic LLM generates up to four refined search queries to retrieve relevant information.
3. 🔎 **SerpAPI Google Search** – The workflow loops through each generated query and retrieves the top search results via SerpAPI.
4. 📄 **Jina AI Web Scraping** – Extracts and summarizes webpage content from the URLs obtained via SerpAPI.
5. 📊 **AI-Powered Content Evaluation** – An AI Agent evaluates the relevance and credibility of the extracted content.
6. 🔁 **Iterative Search Refinement** – If the AI finds insufficient or low-quality information, it generates new search queries to improve results.
7. 📜 **Final Report Generation** – The AI compiles a structured markdown report, including sources with citations.

**Set Up Instructions**
🚀 Estimated setup time: ~10-15 minutes

✅ **Required API Keys:**
- SerpAPI → for Google Search results
- Jina AI → for text extraction
- OpenRouter → for AI-driven query generation and summarization

⚙️ **n8n Components Used:**
- AI Agents with memory buffering for iterative research
- Loops to process multiple search queries efficiently
- HTTP Requests for direct API interactions with SerpAPI and Jina AI

📝 **Recommended Enhancements:**
- Add sticky notes in n8n to explain each step for new users
- Implement a Google Drive or Notion integration to save reports automatically

🎯 **Ideal for:**
✔️ Researchers & Analysts - Automate background research
✔️ Journalists - Quickly gather reliable sources
✔️ Developers - Learn how to integrate multiple AI APIs into n8n
✔️ Students - Speed up literature reviews

🔗 Completely free and open-source! 🚀
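To see what the search-and-scrape loop boils down to, here is a minimal sketch of the two HTTP calls involved. The SerpAPI result fields (organic_results, link) and the r.jina.ai reader prefix follow those services' public interfaces; error handling and summarization are omitted for brevity:

```typescript
// Sketch of one loop iteration: SerpAPI search, then Jina AI page extraction.
async function searchAndRead(query: string, serpApiKey: string): Promise<string[]> {
  // 1. Google search via SerpAPI
  const searchUrl =
    `https://serpapi.com/search.json?engine=google` +
    `&q=${encodeURIComponent(query)}&api_key=${serpApiKey}`;
  const results = (await (await fetch(searchUrl)).json()) as {
    organic_results?: { link: string }[];
  };
  const links = (results.organic_results ?? []).slice(0, 3).map((r) => r.link);

  // 2. Extract readable text from each result via the Jina AI reader
  const pages: string[] = [];
  for (const link of links) {
    const page = await fetch(`https://r.jina.ai/${link}`);
    pages.push(await page.text()); // markdown-style extraction of the page
  }
  return pages;
}
```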
by Sira Ekabut
Facebook access tokens expire quickly, requiring regular updates for continued API access. This workflow simplifies the process of exchanging short-lived tokens for long-lived ones, saving time and reducing manual effort.

**What this workflow does**
- Exchanges a short-lived Facebook User Access Token for a long-lived token using the Facebook Graph API.
- Optionally retrieves a long-lived Page Access Token associated with the user.
- Outputs both the user and page tokens for further use in automation or integrations.

**Setup**
Prerequisites:
- A valid Facebook App ID and App Secret.
- A short-lived User Access Token from the Facebook platform.
- (Optional) The App-Scoped User ID for fetching associated page tokens.

Workflow Configuration:
1. Replace the placeholder values in the "Set Parameter" node with your Facebook credentials and token.
2. Run the workflow manually to generate long-lived tokens.

Documentation Reference:
Follow the official Facebook guide for more details: https://developers.facebook.com/docs/facebook-login/guides/access-tokens/get-long-lived/
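The exchange itself is a single Graph API call. A minimal sketch, assuming Graph API v19.0 (the version number is an assumption; the endpoint and parameters follow Facebook's documented long-lived token flow linked above):

```typescript
// Sketch of the short-lived to long-lived token exchange.
async function exchangeForLongLivedToken(
  appId: string,
  appSecret: string,
  shortLivedToken: string,
): Promise<string> {
  const url =
    `https://graph.facebook.com/v19.0/oauth/access_token` +
    `?grant_type=fb_exchange_token` +
    `&client_id=${appId}` +
    `&client_secret=${appSecret}` +
    `&fb_exchange_token=${shortLivedToken}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Token exchange failed: ${res.status}`);
  const data = (await res.json()) as { access_token: string; expires_in?: number };
  return data.access_token; // long-lived user tokens are typically valid ~60 days
}
```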
by Mohan Gopal
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🤖 **AI-Powered Document QA System using Webhook, Pinecone + OpenAI + n8n**

This project demonstrates how to build a Retrieval-Augmented Generation (RAG) system using n8n and create a simple question-answer system using a Webhook to connect with a user interface (created using Lovable):
- 🧾 Downloads PDF documents from Google Drive (contract documents, user manuals, HR policy documents, etc.)
- 📚 Converts them into vector embeddings using OpenAI
- 🔍 Stores and searches them in the Pinecone vector DB
- 💬 Allows natural language querying of contracts using AI Agents

📂 **Flow 1: Document Loading & RAG Setup**
This flow automates:
- Reading documents from a Google Drive folder
- Vectorizing using text-embedding-3-small
- Uploading vectors into Pinecone for later semantic search

🧱 **Workflow Structure**
```
A[Manual Trigger] --> B[Google Drive Search]
B --> C[Google Drive Download]
C --> D[Pinecone Vector Store]
D --> E[Default Data Loader]
E --> F[Recursive Character Text Splitter]
E --> G[OpenAI Embedding]
```

🪜 **Steps**
1. **Manual Trigger** – Kickstarts the workflow on demand for loading new documents.
2. **Google Drive Search & Download** – Google Drive node (Search: file/folder); downloads the PDF documents.
3. **Recursive Character Text Splitter** – Breaks long documents into overlapping chunks. Settings: Chunk Size 1000, Chunk Overlap 100.
4. **OpenAI Embedding** – Model: text-embedding-3-small; used for creating document vectors.
5. **Pinecone Vector Store** – Host: url; Index: index; Batch Size: 200. Pinecone settings: Type: Dense, Region: us-east-1, Mode: Insert Documents.

💬 **Flow 2: Chat-Based Q&A Agent**
This flow enables chat-style querying of stored documents using OpenAI-powered agents with vector memory.

🧱 **Workflow Diagram**
```
A[Webhook (chat message)] --> B[AI Agent]
B --> C[OpenAI Chat Model]
B --> D[Simple Memory]
B --> E[Answer with Vector Store]
E --> F[Pinecone Vector Store]
F --> G[Embeddings OpenAI]
```

🪜 **Components**
- **Chat (Trigger)** – Receives incoming chat queries.
- **AI Agent node** – Handles the query flow using: Chat Model (OpenAI GPT), Memory (Simple Memory), Tool (Question Answer with Vector Store).
- **Pinecone Vector Store** – Connected via the same embedding index as Flow 1.
- **Embeddings** – Ensure document chunks are retrievable using vector similarity.
- **Response node** – Returns the final AI response to the user via webhook.

🌐 **Flow 3: UI-Based Query with Lovable**
This flow uses a web UI built with Lovable to query contracts directly from a form interface.

📥 **Webhook Setup for Lovable**
Webhook node – Method: POST; URL: url; Response: using the 'Respond to Webhook' node.

🧱 **Workflow Logic**
```
A[Webhook (Lovable Form)] --> B[AI Agent]
B --> C[OpenAI Chat Model]
B --> D[Simple Memory]
B --> E[Answer with Vector Store]
E --> F[Pinecone Vector Store]
F --> G[Embeddings OpenAI]
B --> H[Respond to Webhook]
```

💡 **Lovable UI**
Users can submit:
- Full Name
- Email
- Department
- Freeform Query (any freeform question)
The data is sent via webhook to n8n, which responds with the answer drawn from the contract content.

🔍 **Use Cases**
- Contract querying for Legal/HR teams
- Procurement & vendor agreement QA
- Customer support automation (based on terms)
- RAG systems for private document knowledge

⚙️ **Tools & Tech Stack**
n8n, Google Drive, OpenAI (embeddings and chat), Pinecone, Lovable, Webhooks

📌 **Final Notes**
- Pinecone Index: package1536, Dimension: 1536
- Chunk Size: 1000, Overlap: 100
- Embedding Model: text-embedding-3-small

Feel free to fork the workflow or request the full JSON export. Looking forward to your suggestions and improvements!
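To make the chunking step concrete, here is a minimal sketch of the windowing idea behind the Recursive Character Text Splitter with the settings above. The real splitter also prefers natural boundaries like paragraphs and sentences, which this simplification skips:

```typescript
// Sketch of fixed-size chunking with overlap (chunk 1000, overlap 100).
function splitText(text: string, chunkSize = 1000, overlap = 100): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap; // step forward, keeping 100 chars of context
  }
  return chunks;
}

// Example: a 2500-character document yields chunks sharing 100-char overlaps
const doc = "x".repeat(2500);
console.log(splitText(doc).map((c) => c.length)); // [1000, 1000, 700]
```

The overlap is what lets a semantic match near a chunk boundary still retrieve enough surrounding context at query time.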
by Mario
**Purpose**
This workflow allows you to import any workflow from a file or another n8n instance and map the credentials easily.

**How it works**
- A multi-form setup guides you through the entire process.
- At the beginning you have two options:
  - Upload a workflow file (JSON)
  - Copy a workflow from a remote n8n instance
- If you choose the second option, you first pick one of your predefined (in the Settings node) remote instances; the workflow then retrieves a list of all the workflows using the n8n API, from which you choose one.
- Now both initial options come together: the workflow file is processed.
- In parallel, all credentials of the current instance are retrieved using the Execute Command node.
- The next form page enables a mapping of all the credentials used in the workflow. The matching happens between the names of the original credentials and the ones available on the current instance (names are used because one workflow can contain different credentials of the same type). Every option then shows all available credentials of the same type. In addition, the user always has the choice to create a new credential on the fly.
- For every option set to create a new credential, an empty credential is created on the current instance using the n8n API. An emoji is appended to its name, indicating that it needs to be populated.
- Finally the workflow gets updated with the new credential IDs and created on the current instance using the n8n API. The user then gets a message indicating whether the process succeeded or not.

**Setup**
1. Select your credentials in the nodes which require them.
2. Configure your remote instance(s) in the Settings node. (You can skip this step if you only want to use the file upload feature.) Every instance is defined as an object with the keys name, apiKey, and baseUrl; those instances are then wrapped inside an array. You can find an example described in a note on the workflow canvas, and a sketch after this section.

**How to use**
- Grab the (production) URL of the form from the first node.
- Open the URL and follow the instructions given in the multi-step form.

**Disclaimer**
Security: Be aware that all credentials are decrypted and processed within the workflow. The API keys to other n8n instances are also stored within the workflow. This solution is primarily meant for transferring data between testing environments. For production use, consider the n8n Enterprise edition, which provides a reliable way to deploy workflows between different environments without the need for manual credential mapping.
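A hypothetical example of that Settings structure. All values here are placeholders, and whether baseUrl should include the /api/v1 path depends on how the workflow's HTTP calls are built, so treat that as an assumption:

```typescript
// Illustrative remote-instance list: an array of objects with the keys
// name, apiKey, and baseUrl, as described in the Setup section above.
const remoteInstances = [
  {
    name: "Staging",
    apiKey: "n8n_api_xxxxxxxxxxxxxxxx", // n8n API key of the remote instance
    baseUrl: "https://staging.example.com/api/v1", // path segment is an assumption
  },
  {
    name: "Production",
    apiKey: "n8n_api_yyyyyyyyyyyyyyyy",
    baseUrl: "https://n8n.example.com/api/v1",
  },
];

export default remoteInstances;
```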
by Jaber Zare
**Who is this for?**
This workflow is for DevOps engineers, system administrators, and Docker users who want to automate the process of checking for updates, verifying them, and applying them to their Docker-based deployment in a controlled manner. For example, this workflow is used to update the n8n Docker image.

**What does this workflow do?**
- Fetches the latest Docker image version from GitHub.
- Compares it with the currently running version on the server.
- Sends a Telegram message requesting confirmation if an update is available.
- If confirmed, pulls the latest n8n Docker image, updates the container, and restarts it.
- Sends a Telegram message confirming the update is completed.
- Schedules automatic checks: uses a cron trigger in n8n to check for updates at regular intervals.

**Setup**
1. Ensure n8n is installed and running in a Docker container.
2. Create a Telegram bot using BotFather.
3. Set up Telegram credentials.
4. Set up SSH credentials (ensure the SSH user has sudo access).
5. Obtain the bot token and chat ID.
6. Set the variables in the Default variable node:
   - telegram-id (you can find it by messaging @get_id_bot)
   - n8n-container-name (the name of the n8n container)
   - working-directory (the directory where your docker-compose.yml is)
7. Use a manual trigger or a schedule trigger to check for updates (for n8n, I use a cron every 3 days to check for a new version).

**How to customize this workflow**
- **Change the update mechanism:** Modify the Docker commands if using a different container runtime or orchestration tool.
- **Modify the confirmation flow:** Add extra validation steps, such as checking for breaking changes before updating.
- **Use a different notification method:** Replace Telegram with Slack, email, or another notification service.
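For illustration, a minimal sketch of the check-and-compare step. The repo name, tag format, and docker invocation are assumptions about a typical n8n deployment; the real workflow runs its commands over SSH rather than locally:

```typescript
// Sketch: compare the latest GitHub release tag against the running container.
import { execSync } from "node:child_process";

async function updateAvailable(): Promise<boolean> {
  // Latest published release of n8n, via the public GitHub API
  const res = await fetch("https://api.github.com/repos/n8n-io/n8n/releases/latest");
  const { tag_name } = (await res.json()) as { tag_name: string };
  const latest = tag_name.replace(/^n8n@/, ""); // tag format assumed, e.g. "n8n@1.64.0"

  // Version reported by the running container; "n8n" is an assumed container name
  const running = execSync("docker exec n8n n8n --version").toString().trim();

  return latest !== running;
}

updateAvailable().then((v) => console.log(v ? "Update available" : "Up to date"));
```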