by AdrianWang
How it works

This workflow automates the conversion of various document formats (such as PDF, Word, and PPT) into Markdown. It connects to the MinerU API service, which leverages OCR, formula, and table recognition to produce high-quality output. Users initiate the process by simply uploading a document through an n8n chat interface.

Setup steps

1. Ensure you have a local n8n instance running.
2. Set up and run the MinerU MCP (Model Context Protocol) server locally (a quick connectivity check is sketched below).
3. Import this workflow into your n8n instance.
4. Configure your AI model credentials (e.g., for OpenAI, add your API Key and Base URL).
5. Click the "Write Files from Disk" node and edit the file path to your desired local save location.
6. Click the "MCP Client" node and enter your MinerU MCP server address (e.g., http://localhost:8000/sse).
7. Click the "Open Chat" button to upload a file, send a message, and test the workflow.
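A stopped or misconfigured MCP server is the most common setup failure, so it can help to verify the SSE endpoint before wiring it into the MCP Client node. This is a minimal sketch (TypeScript, runnable with `npx tsx`), assuming the default address from step 6:

```typescript
// Quick connectivity check for the MinerU MCP SSE endpoint.
const MCP_URL = "http://localhost:8000/sse"; // same address entered in the MCP Client node

async function checkEndpoint(): Promise<void> {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 5000); // don't hang on a dead server
  try {
    const res = await fetch(MCP_URL, {
      headers: { Accept: "text/event-stream" },
      signal: controller.signal,
    });
    // An SSE endpoint should answer 200 with content-type text/event-stream
    console.log(`MCP server responded: ${res.status} ${res.headers.get("content-type")}`);
  } catch (err) {
    console.error("MCP server unreachable - is the MinerU MCP server running?", err);
  } finally {
    clearTimeout(timeout);
  }
}

checkEndpoint();
```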
by Damian Karzon
This workflow randomly selects recipes from a Mealie instance (optionally from a specific category) and then creates a meal plan in Mealie with those recipes.

How it works:

- The workflow has a scheduled trigger (set to run weekly on a Friday).
- A Config node sets a few properties to configure the workflow.
- A call to the Mealie API gets the list of recipes.
- The Code node holds most of the logic: it loops through the number of recipes defined in the Config node and randomly selects a recipe from the list, making sure not to double up any recipes (see the sketch below).
- Once all the recipes are selected, it calls the Mealie API to set up the meal plan on the days.

Setup

- Add your Mealie API token as a credential and set it on the HTTP Request nodes.
- Set the schedule trigger to run when you like.
- Update the Config node with the config you want:
  - numberOfRecipes - Number of recipes to populate for the meal plan
  - offsetPlanDays - Number of days in the future to start the plan (0 will start it today, 1 tomorrow, etc.)
  - mealieCategoryId - The id of the category you want to pull recipes from (defaults to selecting from all recipes)
  - mealieBaseUrl - The base URL of your Mealie instance
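For reference, a minimal sketch of the Code node's selection logic: draw `numberOfRecipes` distinct recipes and assign each to a consecutive plan day starting `offsetPlanDays` from today. The `Recipe` shape is simplified; Mealie's API returns more fields.

```typescript
interface Recipe { id: string; name: string; }

function pickMealPlan(recipes: Recipe[], numberOfRecipes: number, offsetPlanDays: number) {
  const pool = [...recipes]; // copy so picks can be removed (no doubled-up recipes)
  const plan: { date: string; recipe: Recipe }[] = [];
  const start = new Date();
  start.setDate(start.getDate() + offsetPlanDays);

  for (let i = 0; i < numberOfRecipes && pool.length > 0; i++) {
    const idx = Math.floor(Math.random() * pool.length);
    const [recipe] = pool.splice(idx, 1); // remove from pool so it can't repeat
    const day = new Date(start);
    day.setDate(start.getDate() + i); // one recipe per consecutive day
    plan.push({ date: day.toISOString().slice(0, 10), recipe });
  }
  return plan;
}
```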
by Jimleuk
This n8n template demonstrates how to get started with Gemini 2.0's new bounding box detection capabilities in your workflows. The key difference is that this enables prompt-based object detection for images, which is quite powerful for things like contextual search over an image, e.g. "Put a bounding box around all adults with children in this image" or "Put a bounding box around cars parked out of bounds of a parking space".

How it works

- An image is downloaded via the HTTP node, and an "Edit Image" node is used to extract the file's width and height.
- The image is then given to the Gemini 2.0 API to parse and return coordinates of the bounding box of the requested subjects. In this demo, we've asked the AI to identify all bunnies.
- The coordinates are then rescaled using the original image's width and height to correctly align them (see the sketch below).
- Finally, to check the accuracy of the object detection, we use the "Edit Image" node to draw the bounding boxes onto the original image.

How to use

Really up to the imagination! Perhaps a form of grounding for evidence-based workflows, or a higher form of image search can be built.

Requirements

- Google Gemini for LLM

Customising the workflow

This template is just a demonstration of an experimental version of Gemini 2.0. It is recommended to wait for Gemini 2.0 to come out of this stage before using it in production.
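A minimal sketch of the rescaling step: Gemini returns bounding boxes as `[ymin, xmin, ymax, xmax]` normalized to a 0-1000 grid, so each coordinate is scaled against the real dimensions reported by the "Edit Image" node.

```typescript
interface PixelBox { x: number; y: number; width: number; height: number; }

// box2d is Gemini's [ymin, xmin, ymax, xmax] on a 0-1000 grid
function rescaleBox(
  box2d: [number, number, number, number],
  imgWidth: number,
  imgHeight: number,
): PixelBox {
  const [yMin, xMin, yMax, xMax] = box2d;
  return {
    x: Math.round((xMin / 1000) * imgWidth),
    y: Math.round((yMin / 1000) * imgHeight),
    width: Math.round(((xMax - xMin) / 1000) * imgWidth),
    height: Math.round(((yMax - yMin) / 1000) * imgHeight),
  };
}

// e.g. on an 800x600 photo: rescaleBox([100, 250, 500, 750], 800, 600)
// -> { x: 200, y: 60, width: 400, height: 240 }
```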
by Srinivasan KB
This n8n workflow provides a ready-to-use API endpoint for extracting structured data from images. It processes an image URL using an AI-powered OCR model and returns the extracted details in a structured JSON format.

Use Cases

- **Document OCR** – Extract details from ID cards, invoices, receipts, etc.
- **Text Extraction from Images** – Process screenshots, scanned documents, and photos.
- **Automated Form Processing** – Digitize and capture information from paper forms.
- **Business Card Data Extraction** – Extract names, emails, and phone numbers from business cards.

How It Works

1. Send a GET request with an image URL and define the required extraction parameters.
2. The image is converted to base64 for processing (a sketch of this step follows below).
3. The AI model (Gemini API - Flash Lite) extracts relevant text.
4. The response returns structured JSON data containing only the requested fields.

Features

✔️ No-Code API Setup – Easily integrate into any application.
✔️ Customizable Extraction – Modify the request parameters to fit your needs.
✔️ AI-Powered OCR – Uses advanced models for accurate text recognition.
✔️ Automated Processing – Ideal for document processing and digitization.

Integration

- Works with any frontend/backend system that supports API calls.
- Can be used for workflow automation in CRM, ERP, and document management solutions.
- Supports further customization based on specific OCR requirements.
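A minimal sketch of steps 2-3: download the image, base64-encode it, and wrap it in a Gemini `generateContent` request. The model name and prompt wording here are illustrative assumptions; adjust them to whatever your GET parameters supply.

```typescript
const GEMINI_KEY = process.env.GEMINI_API_KEY!; // your Gemini API key

async function extractFields(imageUrl: string, fields: string[]) {
  // Fetch the image and convert it to base64 (step 2)
  const img = await fetch(imageUrl);
  const mimeType = img.headers.get("content-type") ?? "image/jpeg";
  const base64 = Buffer.from(await img.arrayBuffer()).toString("base64");

  // Ask the model for only the requested fields, as JSON (step 3)
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-lite:generateContent?key=${GEMINI_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{
          parts: [
            { inline_data: { mime_type: mimeType, data: base64 } },
            { text: `Extract only these fields as JSON: ${fields.join(", ")}` },
          ],
        }],
      }),
    },
  );
  return res.json();
}
```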
by Khairul Muhtadin
The Error Notification workflow is designed to instantly notify you whenever any other n8n workflow encounters an error, using popular communication channels like Telegram and Gmail, with optional support for Discord, Slack, and WhatsApp.

💡 Why Use Error Notification workflow?

- **Immediate Awareness:** Get instant alerts when workflows fail, preventing unnoticed errors and downtime.
- **Multi-Channel Flexibility:** Notify your team via Telegram, Gmail, and optionally Slack, Discord, or WhatsApp.
- **Detailed Context:** Receive rich error information including the error message, node name, time, and execution link for quicker fixes.
- **Easy Integration:** Built with native n8n nodes and customizable code, simple to adopt without complex setup.
- **Open Source & Free:** Use and adapt this workflow at no cost, making professional error monitoring accessible.

⚡ Who Is This For?

- **n8n Workflow Developers:** Quickly spot and respond to automation issues in development or production.
- **Operations Teams:** Maintain uptime and swiftly troubleshoot errors across multiple workflows.
- **Small to Medium Businesses:** Gain professional error alerting without expensive monitoring tools.
- **Automation Enthusiasts:** Enhance your automation reliability with real-time failure notifications.

❓ What Problem Does It Solve?

This workflow embeds error detection and notification directly within your n8n instance. It automates the process of catching errors as they occur, compiling meaningful context, and delivering it instantly via your preferred messaging platforms. This drastically reduces your response time to issues and streamlines error management, improving your automation reliability and operational confidence.

🔧 What This Workflow Does

- ⏱ Trigger: Listens for any error generated in your n8n workflows using the n8n Error Trigger node.
- 📎 Step 2: Executes a Code node that formats a detailed error message capturing the workflow name, error node, description, timestamp, and an execution URL (a sketch of this node follows below).
- 🔍 Step 3: Sends the formatted error notification to multiple communication channels: Telegram and Gmail by default, plus optionally Discord, Slack, and WhatsApp (disabled by default).
- 💌 Step 4: Delivers rich, parsed HTML-formatted messages to ensure error readability and immediate actionability.
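A minimal sketch of the Code node body that assembles the notification text. It runs inside n8n's Code node (JavaScript), where `$input` and a top-level `return` are provided by the sandbox; the field names follow the Error Trigger node's output and may vary slightly between n8n versions, so verify them against a real failed execution.

```typescript
// Code node body - builds an HTML-formatted error message from the
// Error Trigger payload (workflow name, failed node, error, time, link).
const data = $input.first().json;

const message = [
  `🚨 <b>Workflow failed:</b> ${data.workflow?.name ?? "unknown"}`,
  `<b>Node:</b> ${data.execution?.lastNodeExecuted ?? "unknown"}`,
  `<b>Error:</b> ${data.execution?.error?.message ?? "no message"}`,
  `<b>Time:</b> ${new Date().toISOString()}`,
  // execution.url is only present when n8n knows its own base URL
  data.execution?.url ? `<a href="${data.execution.url}">Open execution</a>` : "",
].filter(Boolean).join("\n");

return [{ json: { message } }];
```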
🔐 Setup Instructions

1. Import the provided .json file into your n8n instance (Cloud or self-hosted).
2. Set up credentials:
   - Gmail OAuth credentials for sending emails via the Gmail node
   - Telegram API credentials for Telegram notifications
   - (Optional) Discord Webhook URL credential for Discord notifications
   - (Optional) Slack Webhook credential for Slack notifications
   - (Optional) WhatsApp connection credentials (if enabled)
3. Customize the Code node if needed to adjust the error message format or target chat IDs.
4. Update the chat IDs and recipient details in each notification node according to your channels.
5. Test the workflow by manually triggering an error in another workflow to verify proper notifications.

🧩 Pre-Requirements

- Active n8n instance (cloud or self-hosted) with a version supporting the Error Trigger node
- Telegram bot credentials and chat ID
- (Optional) Gmail, Discord, Slack, or WhatsApp accounts and webhook credentials if you want to use those channels

🛠️ Customize It Further

- Enable and configure additional notification nodes like Slack or WhatsApp to fit your team's communication style.
- Customize the error message template in the Code node to include extra metadata or format it differently (e.g., markdown).
- Integrate with incident management tools via webhook nodes or create tickets automatically on error.

🧠 Nodes Used

Error Trigger, Code, Telegram, Gmail, Discord (disabled), Slack (disabled), WhatsApp (disabled), Sticky Note (for description)

📞 Support

Made by: khaisa Studio
Tags: notification, error, monitoring, workflow, automation, alerts
Category: Monitoring & Alerts
Need a custom workflow? Contact me on LinkedIn or the Web.
by M Sayed
Stop guessing currency rates! Get a quick and clean exchange rate summary sent right to your phone. 📲 This workflow automatically checks the latest rates and builds a simple report for you.

What it does:

- 🤑 Fetches the very latest exchange rates from an API.
- 🌍 Shows you what major currencies (like USD & EUR) are worth in your chosen local currency.
- ✍️ Creates a simple, easy-to-read report (sketched below).
- 🚀 Delivers it straight to your Telegram!

Setup is easy: all you need to do is set your base currency (e.g., 'EGP') and your Telegram Chat ID. Done! ✅
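A minimal sketch of the fetch-and-report logic. The rates API used here (open.er-api.com, which keys all rates off USD) is an assumption; swap in whichever provider your HTTP Request node actually calls.

```typescript
const BASE = "EGP"; // your base currency, as in the setup step

async function buildReport(): Promise<string> {
  const res = await fetch("https://open.er-api.com/v6/latest/USD");
  const { rates } = (await res.json()) as { rates: Record<string, number> };

  // rates are "units of currency per 1 USD", so:
  const usdToBase = rates[BASE];                 // 1 USD in base currency
  const eurToBase = rates[BASE] / rates["EUR"];  // 1 EUR in base currency

  return [
    `💱 Exchange rates (${new Date().toDateString()})`,
    `1 USD = ${usdToBase.toFixed(2)} ${BASE}`,
    `1 EUR = ${eurToBase.toFixed(2)} ${BASE}`,
  ].join("\n"); // this string is what gets sent to Telegram
}
```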
by Laura Piraux
Use case

This automation is for teams working in Notion. When you have a lot of back and forth in the comment section, it's easy to lose track of what is going on in the conversation. This automation relies on AI to generate a summary of the comment section.

How it works

Every hour (the trigger can be adapted to your needs and use case), the automation checks whether new comments have been added to the pages of your Notion database (a sketch of this check follows below). If there are new comments, they are sent to an AI model to write a summary. The summary is then added to a predefined page property. The automation also updates a "Last execution" property; this prevents re-generating the AI summary when no new comments have been received.

Setup

1. Define your Notion variables: the Notion database, the property that will hold the AI summary, and the property that will hold the last execution date of the automation.
2. Set up your Notion credentials.
3. Set up your AI model credentials (API key).

How to adjust it to your needs

- Use the LLM model of your choice. In this template, I used Gemini, but you can easily replace it with ChatGPT, Claude, etc.
- Adapt the prompt to your use case to get better summaries: specify the maximum number of characters, give an example, etc.
- Adapt the trigger to your needs. You could use Notion webhooks as the trigger in order to run the automation only when a new comment is added (this setup is advised if you're on the n8n cloud version).
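A minimal sketch of the "any new comments?" check, assuming the Notion comments endpoint: list a page's comments and keep only those created after the stored "Last execution" timestamp.

```typescript
async function newCommentsSince(pageId: string, lastExecutionIso: string, token: string) {
  // Notion lists comments for a page via the block_id query parameter
  const res = await fetch(`https://api.notion.com/v1/comments?block_id=${pageId}`, {
    headers: {
      Authorization: `Bearer ${token}`,
      "Notion-Version": "2022-06-28",
    },
  });
  const { results } = (await res.json()) as { results: { created_time: string }[] };

  // ISO-8601 strings compare lexicographically, so this keeps only
  // comments newer than the last run - the ones that need a fresh summary
  return results.filter((c) => c.created_time > lastExecutionIso);
}
```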
by Yaron Been
Automated pipeline that extracts job listings from Upwork and exports them to Google Sheets for better organization, analysis, and team collaboration.

🚀 What It Does

- Fetches job postings based on saved searches
- Extracts key job details (title, budget, description)
- Organizes data in Google Sheets
- Updates in real-time
- Supports multiple search criteria

🎯 Perfect For

- Freelancers tracking opportunities
- Teams managing multiple projects
- Agencies monitoring client needs
- Market researchers
- Business analysts

⚙️ Key Benefits

✅ Centralized job board
✅ Easy sharing with team members
✅ Advanced filtering and sorting
✅ Historical data tracking
✅ Customizable data points

🔧 What You Need

- Upwork account
- Google account
- n8n instance
- Google Sheets setup

📊 Data Exported

- Job title and description
- Budget and hourly rate
- Client information
- Posted date
- Required skills
- Job URL

🛠️ Setup & Support

Quick Setup: get started in 15 minutes with our step-by-step guide.

📺 Watch Tutorial | 💼 Get Expert Support | 📧 Direct Help

Streamline your job search and opportunity tracking with automated data collection and organization.
by PollupAI
LinkedIn Profile Enrichment Workflow

Who is this for?

This workflow is ideal for recruiters, sales professionals, and marketing teams who need to enrich LinkedIn profiles with additional data for lead generation, talent sourcing, or market research.

What problem is this workflow solving?

Manually gathering detailed LinkedIn profile information can be time-consuming and prone to errors. This workflow automates the process of enriching profile data from LinkedIn, saving time and ensuring accuracy.

What this workflow does

1. Input: Reads LinkedIn profile URLs from a Google Sheet.
2. Validation: Filters out already enriched profiles to avoid redundant processing.
3. Data Enrichment: Uses RapidAPI's Fresh LinkedIn Profile Data API to retrieve detailed profile information (a sketch of this call follows below).
4. Output: Updates the Google Sheet with enriched profile data, appending new information efficiently.

Setup

1. Google Sheet: Create a sheet with a column named linkedin_url and populate it with the profile URLs to enrich.
2. RapidAPI Account: Sign up at RapidAPI and subscribe to the Fresh LinkedIn Profile Data API.
3. API Integration: Replace the x-rapidapi-key and x-rapidapi-host values with your credentials from RapidAPI.
4. Run the Workflow: Trigger the workflow and monitor the updates to your Google Sheet.

How to customize this workflow

- **Filter Criteria** – Modify the filter step to include additional conditions for processing profiles.
- **API Configuration** – Adjust API parameters to retrieve specific fields or extend usage.
- **Output Format** – Customize how the enriched data is appended to the Google Sheet (e.g., format, column mappings).
- **Error Handling** – Add steps to handle API rate limits or missing data for smoother automation.

This workflow streamlines LinkedIn profile enrichment, making it faster and more effective for data-driven decision-making.
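A minimal sketch of the enrichment request (step 3). The headers are the ones named in the setup steps; the host and path follow RapidAPI's usual pattern for this API and are assumptions, so confirm the exact endpoint in your RapidAPI dashboard before relying on it.

```typescript
async function enrichProfile(linkedinUrl: string, apiKey: string) {
  // Host/path assumed from RapidAPI's naming convention - verify in your dashboard
  const host = "fresh-linkedin-profile-data.p.rapidapi.com";
  const url = `https://${host}/get-linkedin-profile?linkedin_url=${encodeURIComponent(linkedinUrl)}`;

  const res = await fetch(url, {
    headers: {
      "x-rapidapi-key": apiKey, // same credentials referenced in the setup steps
      "x-rapidapi-host": host,
    },
  });
  if (!res.ok) throw new Error(`RapidAPI request failed: ${res.status}`);
  return res.json(); // enriched profile fields, appended back to the sheet
}
```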
by Thomas Janssen
Build an MCP Server which has access to a semantic database to perform Retrieval Augmented Generation (RAG).

Tutorial

Click here to watch the full tutorial on YouTube

How it works

This MCP Server has access to a local semantic database (Qdrant) and answers questions asked by the MCP Client.

AI Agent Template

Click here to navigate to the AI Agent n8n workflow which uses this MCP server.

Warning

This flow only runs locally and cannot be executed on the n8n cloud platform because of the MCP Client community node.

Installation

1. Install n8n + Ollama + Qdrant using the Self-hosted AI Starter Kit.
2. Make sure to install Llama 3.2 and mxbai-embed-large as the embeddings model.
3. Activate the n8n flow.
4. Run the "RAG Ingestion Pipeline" and upload some PDF documents.

How to use it

Run the MCP Client workflow and ask a question. It will be answered either by using the semantic database or by the search engine API.

More detailed instructions

Missed a step? Find more detailed instructions here: https://brightdata.com/blog/ai/news-feed-n8n-openai-bright-data
by Yaron Been
Workflow Overview

This cutting-edge n8n automation is a sophisticated market research and intelligence gathering tool designed to transform web content discovery into actionable insights. By intelligently combining web crawling, AI-powered filtering, and smart summarization, this workflow:

1. Discovers Relevant Content:
   - Automatically crawls target websites
   - Identifies trending topics
   - Extracts comprehensive article details
2. Intelligent Content Filtering:
   - Applies custom keyword matching (a sketch of this filter appears at the end of this section)
   - Filters for the most relevant articles
   - Ensures high-quality information capture
3. AI-Powered Summarization:
   - Generates concise, meaningful summaries
   - Extracts key insights
   - Provides quick, digestible information
4. Seamless Delivery:
   - Sends summaries directly to Slack
   - Enables instant team communication
   - Facilitates rapid information sharing

Key Benefits

- 🤖 **Full Automation:** Continuous market intelligence
- 💡 **Smart Filtering:** Precision content discovery
- 📊 **AI-Powered Insights:** Intelligent summarization
- 🚀 **Instant Delivery:** Real-time team updates

Workflow Architecture

🔹 Stage 1: Content Discovery
- **Scheduled Trigger** – Daily market research
- **FireCrawl Integration** – Web content crawling
- **Comprehensive Site Scanning** – Extracts article metadata, captures full article content, identifies key information sources

🔹 Stage 2: Intelligent Filtering
- **Keyword-Based Matching**
- **Relevance Assessment**
- **Custom Domain Optimization** – AI and technology focus; startup and innovation tracking

🔹 Stage 3: AI Summarization
- **OpenAI GPT Integration**
- **Contextual Understanding**
- **Concise Insight Generation** – 3-point summary format; captures essential information

🔹 Stage 4: Team Notification
- **Slack Integration**
- **Instant Information Sharing**
- **Formatted Insight Delivery**

Potential Use Cases

- **Market Research Teams** – Trend tracking
- **Innovation Departments** – Technology monitoring
- **Startup Ecosystems** – Competitive intelligence
- **Product Management** – Industry insights
- **Strategic Planning** – Rapid information gathering

Setup Requirements

- FireCrawl API: web crawling credentials, configured crawling parameters
- OpenAI API: GPT model access, summarization configuration, API key management
- Slack Workspace: channel for insights delivery, appropriate app permissions, webhook configuration
- n8n Installation: cloud or self-hosted instance, workflow configuration, API credential management

Future Enhancement Suggestions

- 🤖 Multi-source crawling
- 📊 Advanced sentiment analysis
- 🔔 Customizable alert mechanisms
- 🌐 Expanded topic tracking
- 🧠 Machine learning refinement

Technical Considerations

- Implement robust error handling
- Use exponential backoff for API calls
- Maintain flexible crawling strategies
- Ensure compliance with website terms of service

Ethical Guidelines

- Respect content creator rights
- Use data for legitimate research
- Maintain transparent information gathering
- Provide proper attribution

Workflow Visualization

[Daily Trigger] ⬇️ [Web Crawling] ⬇️ [Content Filtering] ⬇️ [AI Summarization] ⬇️ [Slack Delivery]

Connect With Me

Ready to revolutionize your market research?

📧 Email: Yaron@nofluff.online
🎥 YouTube: @YaronBeen
💼 LinkedIn: Yaron Been

Transform your information gathering with intelligent, automated workflows!

#AIResearch #MarketIntelligence #AutomatedInsights #TechTrends #WebCrawling #AIMarketing #InnovationTracking #BusinessIntelligence #DataAutomation #TechNews
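A minimal sketch of Stage 2's keyword filter: keep only articles whose title or body mentions a tracked keyword. The keyword list here is illustrative; substitute your own domain focus.

```typescript
interface Article { title: string; content: string; url: string; }

const KEYWORDS = ["ai", "startup", "machine learning", "funding"]; // illustrative

function filterRelevant(articles: Article[]): Article[] {
  return articles.filter((a) => {
    // Case-insensitive match across title and body text
    const text = `${a.title} ${a.content}`.toLowerCase();
    return KEYWORDS.some((kw) => text.includes(kw));
  });
}
```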
by phil
This workflow automates web scraping of Amazon search result pages by retrieving raw HTML, cleaning it to retain only the relevant product elements, and then using an LLM to extract structured product data (name, description, rating, reviews, and price), before saving the results back to Google Sheets. It integrates Google Sheets to supply and collect URLs, BrightData to fetch page HTML, a custom n8n Function node to sanitize the HTML, LangChain (OpenRouter GPT-4) to parse product details, and Google Sheets again to store the output.

Who Needs Amazon Search Result Scraping?

This scraping workflow is ideal for teams and businesses that need to monitor Amazon product listings at scale:

- **E-commerce Analysts** – Track competitor pricing, ratings, and inventory trends.
- **Market Researchers** – Collect data on product popularity and reviews for market analysis.
- **Data Teams** – Automate ingestion of product metadata into BI pipelines or data lakes.
- **Affiliate Marketers** – Keep affiliate catalogs up to date with the latest product details and prices.

If you need reliable, structured data from Amazon search results delivered directly into your spreadsheets, this workflow saves you hours of manual copy-and-paste.

Why Use This Workflow?

- **End-to-End Automation** – From URL list to clean JSON output in Sheets.
- **Robust HTML Cleaning** – Strips scripts, styles, unwanted tags, and noise.
- **Accurate Structured Parsing** – Leverages GPT-4 via LangChain for reliable extraction.
- **Scalable & Repeatable** – Processes thousands of URLs in batches.

Step-by-Step: How This Workflow Scrapes Amazon

1. Get URLs from Google Sheets – Reads a list of search result URLs.
2. Loop Over Items – Iterates through each URL in controlled batches.
3. Fetch Raw HTML – Uses BrightData's Web Unlocker proxy to retrieve the page.
4. Clean HTML – A Function node removes the doctype, scripts, styles, head, comments, classes, and non-whitelisted tags, collapsing extra whitespace (a sketch of this node follows below).
5. Extract with LLM – Passes the cleaned HTML into LangChain → GPT-4 to output JSON for each product: name, description, rating, reviews, price.
6. Save Results – Appends the JSON fields as columns back into a "results" sheet in Google Sheets.
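A minimal sketch of that cleaning logic (step 4). The regex approach and the `allowedTags` whitelist mirror what the Function node is described as doing; tune the list to the elements your target pages actually need.

```typescript
// Tags to keep; everything else is stripped (its inner text survives)
const allowedTags = ["div", "span", "a", "p", "h1", "h2", "h3", "ul", "li", "img"];

function cleanHtml(html: string): string {
  let out = html
    .replace(/<!doctype[^>]*>/gi, "")             // drop the doctype
    .replace(/<script[\s\S]*?<\/script>/gi, "")   // drop scripts
    .replace(/<style[\s\S]*?<\/style>/gi, "")     // drop styles
    .replace(/<head[\s\S]*?<\/head>/gi, "")       // drop the head
    .replace(/<!--[\s\S]*?-->/g, "")              // drop HTML comments
    .replace(/\sclass="[^"]*"/gi, "");            // drop class attributes

  // Remove any tag not in the whitelist, keeping its inner text
  out = out.replace(/<\/?([a-z][a-z0-9]*)[^>]*>/gi, (match, tag) =>
    allowedTags.includes(tag.toLowerCase()) ? match : ""
  );

  return out.replace(/\s+/g, " ").trim(); // collapse extra whitespace
}
```

Keeping the payload small this way also cuts LLM token costs before the LangChain step.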
Customization: Tailor to Your Needs

- **Adaptable Sites** – This workflow can be adapted to any e-commerce or other website, for example Walmart or eBay.
- **Whitelist Tags** – Modify the allowedTags array in the Code node to keep additional HTML elements.
- **Schema Changes** – Update the Structured Output Parser schema to include more fields (e.g., availability, SKU).
- **Alternate Data Sink** – Instead of Sheets, route output to a database, CSV file, or webhook.

🔑 Prerequisites

- **Google Sheets Credentials** – OAuth credentials configured in n8n.
- **BrightData API token** – Stored in n8n credentials as BRIGHTDATA_TOKEN.
- **OpenRouter API Key** – Configured for the LangChain node to call GPT-4.
- **n8n Instance** – Self-hosted or cloud with sufficient quota for HTTP requests and LLM calls.

🚀 Installation & Setup

1. Configure Credentials
   - In n8n, set up Google Sheets OAuth under "Credentials."
   - Add the BrightData token as a new HTTP Request credential.
   - Create an OpenRouter API key credential for the LangChain node.
2. Import the Workflow
   - Copy the JSON workflow into n8n's "Import" dialog.
   - Map your Google Sheet IDs and GIDs to the {{WEB_SHEET_ID}}, {{TRACK_SHEET_GID}}, and {{RESULTS_SHEET_GID}} placeholders.
   - Ensure the BRIGHTDATA_TOKEN credential is selected on the HTTP Request node.
3. Test & Run
   - Add a few Amazon search URLs to your "track" sheet.
   - Execute the workflow and verify product data appears in your "results" sheet.
   - Tweak the batch size or parser schema as needed.

⚠ Important

- **API Rate Limits** – Monitor your BrightData and OpenRouter usage to avoid throttling.
- **Amazon's Terms** – Ensure your scraping complies with Amazon's policies and legal requirements.

Summary

This workflow delivers a fully automated, scalable solution to extract structured product data from Amazon search pages directly into Google Sheets, streamlining your competitive analysis and data collection. 🚀

Phil | Inforeole