by Miquel Colomer
This n8n workflow template automates the process of collecting and delivering the "Top Deals of the Day" from MediaMarkt, tailored to user preferences. By combining user-submitted forms, Bright Data web scraping, GPT-4o-mini deal generation, and email delivery, this workflow sends personalized product recommendations straight to a user's inbox.

> ⚠️ **Note:** This workflow uses community nodes (Bright Data and Document Generator) which only work on *self-hosted n8n instances*.

## 🚀 What It Does

- Collects user preferences via a form (categories + email)
- Scrapes MediaMarkt's deals page using Bright Data
- Uses GPT-4o-mini (OpenAI) to recommend top deals
- Generates a structured HTML email using a template
- Sends the personalized deals directly via email

## 🧩 Community Node Integration

We created and used the following community nodes:

- **Bright Data** – To scrape MediaMarkt deals using proxy-based scraping
- **Document Generator** – To generate a templated HTML document from deal data

These nodes are not available in n8n Cloud and require self-hosted n8n.

## 🛠️ Step-by-Step Setup

1. **Install Community Nodes** – Make sure you're on a self-hosted n8n instance, then install:
   - n8n-nodes-brightdata
   - n8n-nodes-document-generator
2. **Configure Credentials**
   - Bright Data API key (proxy + scraping setup)
   - OpenAI API key (GPT-4o-mini access)
   - SMTP credentials for sending emails
3. **Customize the Form** – Adapt the form node to collect desired categories and email addresses. Typical categories include appliances, phones, laptops, etc.
4. **Design Your HTML Template** – In the Document Generator node, you can tweak the HTML/CSS to change how deals appear in the final email.
5. **Test the Workflow** – Submit the form with test data and check that the entire flow—from scraping to email—executes as expected.

## 🧠 How It Works: Workflow Overview

1. **User Interaction via Form** – Users select product categories and enter their email. This triggers the workflow.
2. **Data Extraction via Bright Data** – Bright Data scrapes the MediaMarkt offers page and returns HTML content.
3. **HTML Parsing** – Key elements like product names, prices, and links are extracted for processing (a parsing sketch appears at the end of this description).
4. **GPT-4o-mini Recommendation Generation** – The extracted data is sent to OpenAI (GPT-4o-mini), which filters, ranks, and enhances deals based on the user's preferences.
5. **Data Structuring & Split** – The result is split into individual deal items to be formatted.
6. **HTML Document Creation** – Document Generator populates a clean HTML template with the top recommended deals.
7. **Email Delivery** – The final document is emailed via SMTP to the user with a friendly message.

## 📨 Final Output

Users receive a custom HTML email featuring a curated list of top MediaMarkt deals based on their selected categories.

## 🔐 Credentials Used

- **Bright Data API** – Web scraping with proxy support
- **OpenAI API** – Generating personalized recommendations
- **SMTP** – Sending personalized deal emails

## ✨ Customization Tips

- **Change the Data Source**: You can adapt this to scrape other e-commerce sites.
- **Update the Email Template**: Make it match your branding or include images.
- **Extend the Form**: Add preferences like price range or specific brands.
- **Add Scheduling**: Use Cron to run the workflow daily or weekly.

## ❓ Questions?

Template and node created by Miquel Colomer and n8nhackers.com. Need help customizing or deploying? Contact us for consulting and support.
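The HTML parsing step depends on MediaMarkt's current page markup, so any selector or regex is an assumption to be checked against the real Bright Data response. A minimal n8n Code-node sketch that pulls product name, price, and link out of the scraped HTML:

```javascript
// Hypothetical Code-node sketch: extract deals from scraped HTML.
// The `data` field and the regex below are assumptions; inspect the
// actual Bright Data output and adapt the pattern to MediaMarkt's markup.
const html = $json.data ?? '';

const deals = [];
const pattern = /<a[^>]*href="(?<link>[^"]+)"[^>]*>(?<name>[^<]+)<\/a>[\s\S]*?(?<price>\d+[.,]\d{2})\s*€/g;

for (const match of html.matchAll(pattern)) {
  deals.push({
    name: match.groups.name.trim(),
    price: match.groups.price,
    link: match.groups.link,
  });
}

// One n8n item per deal, ready for the GPT-4o-mini recommendation step.
return deals.map((deal) => ({ json: deal }));
```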
by bangank36
## Overview

This workflow retrieves all blog and event collection items from a Squarespace site and saves them into a Google Sheets spreadsheet. It uses pagination to fetch 20 items per request, ensuring all content is collected efficiently.

## How It Works

1. The workflow queries your Squarespace blog and event collections.
2. It fetches data in paginated batches (20 items per page); a pagination sketch appears at the end of this description.
3. The retrieved data is formatted and inserted into Google Sheets.
4. The workflow runs on demand or on a schedule, ensuring your data stays up to date.

## Requirements

### Credentials

To use this template, you need:

- Your Squarespace collection URL
- Google Sheets API credentials

### Google Sheets Setup

Use this sample Google Sheets template to get started quickly.

## Who Is This For?

This template is designed for:

- Bloggers looking to manage and analyze content externally.
- Businesses and marketers tracking content performance.
- Anyone who needs an automated way to extract Squarespace blog and event data.

## Explore More Templates

Check out my other n8n templates: 👉 n8n.io/creators/bangank36
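For readers curious what the pagination loop does under the hood, here is a minimal standalone sketch. It assumes the public Squarespace JSON endpoint (`?format=json`) and its `pagination.nextPageOffset` field, which is how Squarespace collections typically expose paging; verify the field names against your own site's response.

```javascript
// Minimal sketch of offset-based pagination against a Squarespace
// collection. Field names (items, pagination.nextPageOffset) are
// assumptions based on Squarespace's public JSON format.
const collectionUrl = 'https://example.squarespace.com/blog'; // hypothetical URL

async function fetchAllItems() {
  const all = [];
  let offset;

  do {
    const url = `${collectionUrl}?format=json${offset ? `&offset=${offset}` : ''}`;
    const page = await (await fetch(url)).json();

    all.push(...(page.items ?? []));
    offset = page.pagination?.nextPage ? page.pagination.nextPageOffset : undefined;
  } while (offset);

  return all; // every blog/event item, fetched 20 per request
}

fetchAllItems().then((items) => console.log(`Fetched ${items.length} items`));
```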
by Robert Breen
# n8n Workflow: OpenAI DALL·E 2 Image Generation & Google Drive Upload

## Description

This n8n workflow automates the process of generating multiple AI-created images from a single prompt using OpenAI's DALL·E 2, then uploads the results directly to a Google Drive folder. It includes a loop to produce several image variations for the same prompt, making it ideal for creative projects, marketing materials, or content experimentation.

## Step-by-Step Setup Instructions

### 1. Prepare Your API Keys

**OpenAI API Key**
- Sign up or log in at https://platform.openai.com/
- Go to API Keys and create a new one.
- Copy and store this securely — you'll need it in n8n.

**Google Drive API**
- Go to https://console.cloud.google.com/
- Create a project and enable the Google Drive API.
- Create OAuth 2.0 credentials and set the redirect URI to your n8n OAuth redirect (found in your n8n Google Drive node setup).
- Connect your Google account when adding credentials in n8n.

### 2. Workflow Nodes Overview

1. **Manual Trigger** – Starts the workflow manually.
2. **Set Image Prompt** – Stores the prompt text and base file name (e.g., "Make an image of an attractive woman standing in New York City").
3. **Duplicate Rows (Code Node)** – Creates multiple "runs" of the same prompt for variation.
4. **Loop Over Items** – Processes each variation one at a time.
5. **Generate an image (OpenAI DALL·E 2)** – Sends the prompt to OpenAI and retrieves an image.
6. **Upload to Google Drive** – Saves each generated image to your chosen Google Drive folder.

### 3. Building the Workflow in n8n

#### Step 1 — Manual Trigger

Add a Manual Trigger node to start the workflow manually when testing.

#### Step 2 — Set Image Prompt

Add a Set node with two fields:

- **Prompt** → The image description text.
- **Name** → The base name for the saved file.

Example:

| Name   | Value                                                           |
|--------|-----------------------------------------------------------------|
| Prompt | Make an image of an attractive woman standing in New York City |
| Name   | woman-nyc                                                       |

#### Step 3 — Duplicate Rows (Code Node)

Use this JavaScript to create three copies of the prompt (run 1, run 2, run 3):

```javascript
const original = items[0].json;
return [
  { json: { ...original, run: 1 } },
  { json: { ...original, run: 2 } },
  { json: { ...original, run: 3 } },
];
```

#### Step 4 — Loop Over Items

Insert a Split in Batches node and set the batch size to 1. This ensures each prompt variation runs through the image generation process individually. Connect this node so it runs after the Duplicate Rows node.

#### Step 5 — Generate Image

Add the OpenAI Image Generation node and configure it as follows:

- **Model**: dall-e-2
- **Prompt**: `={{ $json.Prompt }}`
- Leave other options at their defaults unless you want to specify image size or style.

Connect your OpenAI API credentials created in Step 1. This node will send the current prompt in the batch to OpenAI's DALL·E 2 model and return an AI-generated image.

#### Step 6 — Upload to Google Drive

Add a Google Drive node and configure it to store the generated image:

- **File Name**: `={{ $('Set Image Prompt').item.json.Name }} - {{ $('Duplicate Rows').item.json.run }}`
- **Folder ID**: Select the target Google Drive folder where images should be saved.

Connect your Google Drive OAuth2 API credentials. The node will upload each generated image to your chosen Google Drive location, with a unique filename for each variation.

## Running the Workflow

Execute the workflow manually. The process will:

1. Loop through each prompt variation.
2. Generate an image using OpenAI DALL·E 2.
3. Upload the image to Google Drive with a unique name.

You will find all generated images in the selected Google Drive folder.
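If you want more (or fewer) than three variations, the Duplicate Rows code generalizes naturally. A small sketch of the same Code node with the run count pulled into a single constant:

```javascript
// Generalized Duplicate Rows: emit `runs` copies of the incoming item,
// each tagged with a run number for unique file naming downstream.
const original = items[0].json;
const runs = 5; // change this to control how many images are generated

return Array.from({ length: runs }, (_, i) => ({
  json: { ...original, run: i + 1 },
}));
```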
## Customization Tips

- Change the number of variations by editing the Duplicate Rows code (see the generalized sketch above).
- Adjust the prompt dynamically from other data sources like Google Sheets, webhooks, or forms.
- Schedule the workflow to run at specific times or trigger it via an API call.

Created by Robert A. – Ynteractive
Website: https://ynteractive.com
Email: robert@ynteractive.com
by Jon Bungartz
## How it works

- Creates a new page in Confluence based on a page template also defined in Confluence
- Replaces any number of placeholders with data from your workflow (a minimal sketch appears after this description)
- Generic implementation for maximum flexibility

## Set up steps

All parameters you need to change are defined in the Set node:

- Set your Atlassian domain
- Set the template ID you want to use as the basis for new pages
- Set the target space and parent page for new pages added based on that template

🎥 The explainer video has all the details. =)

## Feedback

Any feedback is welcome. If you have ideas for improvements, let me know.
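The placeholder replacement amounts to generic string substitution over the template's storage-format HTML. A minimal Code-node sketch, assuming placeholders written as `{{key}}`; the marker syntax and input field names are assumptions, so match them to your own template:

```javascript
// Hypothetical sketch: swap {{placeholders}} in a Confluence template
// body for workflow data. Marker syntax and field names are assumptions.
const template = $json.templateBody ?? ''; // storage-format HTML from Confluence
const values = {
  customer: 'ACME Corp',
  owner: 'Jane Doe',
  date: new Date().toISOString().slice(0, 10),
};

const page = template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
  key in values ? values[key] : match // leave unknown placeholders visible
);

return [{ json: { page } }];
```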
by n8n Team
This workflow syncs Discord scheduled events to Google Calendar. On a specified schedule, a request to Discord's API is made to get the scheduled events on a particular server. Only the events that have not been created or have recently been updated will be sent to Google Calendar.

## Prerequisites

- Discord account and Discord credentials.
- Google account and Google credentials.

## How it works

1. Triggers off on the On schedule node.
2. Gets the scheduled events from Discord.
3. The IDs of the Discord scheduled events are used to get the events from Google Calendar, since the IDs are the same on creation of the Google Calendar event. We can now determine which events are new or have been updated (a minimal diff sketch follows this description).
4. The new or updated events are created or updated in Google Calendar.
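The new-versus-updated decision is a simple diff keyed on the shared event ID. A minimal sketch, assuming each Discord event carries an `id` and the Google Calendar lookup either returns a matching event or nothing; the field names here are assumptions:

```javascript
// Hypothetical sketch: classify Discord events as new or updated by
// looking them up in Google Calendar under the same ID.
function diffEvents(discordEvents, calendarEventsById) {
  const toCreate = [];
  const toUpdate = [];

  for (const event of discordEvents) {
    const existing = calendarEventsById.get(event.id);
    if (!existing) {
      toCreate.push(event); // never synced before
    } else if (existing.summary !== event.name) {
      toUpdate.push(event); // changed since last sync (simplified check)
    }
  }
  return { toCreate, toUpdate };
}

// Example: one already-synced event that was renamed, one brand-new event.
const calendar = new Map([['123', { summary: 'Old name' }]]);
console.log(
  diffEvents([{ id: '123', name: 'New name' }, { id: '456', name: 'Launch' }], calendar)
);
```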
by Dave Long
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Using the serial numbers of assets, this workflow creates a ticket with the subject "Found duplicate Serial Numbers" listing all of the duplicate assets for a technician to review and merge. Duplicate assets cause incorrect billing (if customers are billed based on asset counts) and additional overhead when reviewing the history of an asset, since that history is spread across multiple instances.

Note: Due to limitations of the Syncro API, automatically merging duplicate assets is not possible.

## How it works

1. Get a list of all assets in Syncro and summarize the list based on the Customer ID, Asset Type, and Asset Serial (a grouping sketch follows this description).
2. Create a new ticket listing all of the duplicate assets.

## Set up steps

1. Install the Syncro RMM community node.
2. Connect a Syncro RMM account (see note below).
3. Open the "Create a ticket" node and update the customer ID.

Note: See the Syncro RMM community node documentation for details about how to get a Syncro API key and what permissions the API key needs.
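The duplicate detection is a group-and-count over the asset list. A minimal sketch of the summarize step, assuming asset objects expose customer ID, type, and serial fields (names hypothetical):

```javascript
// Hypothetical sketch: group assets by (customer, type, serial) and keep
// only the groups with more than one member — those are the duplicates.
function findDuplicates(assets) {
  const groups = new Map();

  for (const asset of assets) {
    const key = `${asset.customerId}|${asset.assetType}|${asset.serial}`;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(asset);
  }

  return [...groups.values()].filter((group) => group.length > 1);
}

const dupes = findDuplicates([
  { customerId: 1, assetType: 'Laptop', serial: 'SN-1' },
  { customerId: 1, assetType: 'Laptop', serial: 'SN-1' },
  { customerId: 2, assetType: 'Server', serial: 'SN-9' },
]);
console.log(dupes); // one duplicate pair for customer 1
```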
by ist00dent
This n8n template allows you to monitor hourly weather conditions in a specific city using OpenWeatherMap and log the results to a Google Sheet. It's perfect for anyone needing periodic weather tracking—whether you're managing logistics, travel planning, or environmental monitoring.

## 🔧 How it works

1. A Schedule Trigger activates the workflow every hour.
2. The Get Weather Data from OpenWeatherMap node fetches real-time weather details using the city name you specify.
3. An IF node checks whether the weather description contains "rain" or the temperature is below a set threshold (a sketch of this check follows this description).
4. If the condition is true, the data is formatted with city, temperature, humidity, and conditions.
5. The Google Sheets node appends this formatted information to your designated sheet.

## 👤 Who is it for?

This workflow is ideal for:

- Operations teams monitoring weather-sensitive logistics
- Researchers collecting climate data
- Developers and hobbyists learning how to connect APIs with Google Sheets

## 🗂️ Google Sheet Structure

Your Google Sheet should have the following columns:

- city (string)
- temperature (K) (number)
- humidity (number)
- conditions (string)
- status (string)

## ⚙️ Setup Instructions

1. Create a Google Sheet with the above columns.
2. Set up your Google Service Account credentials in n8n.
3. Replace the API key in the HTTP Request node with your own OpenWeatherMap credential.
4. Specify your target city and ensure your OpenWeatherMap account is active.
5. Adjust the frequency in the Schedule Trigger as needed (default: every hour).
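For reference, here is the IF node's check expressed as a Code-node sketch. The response fields (`weather[0].description`, `main.temp` in Kelvin, `main.humidity`, `name`) follow OpenWeatherMap's standard current-weather API; the threshold value is an arbitrary example:

```javascript
// Sketch of the IF-node logic as code: flag rain or low temperature.
// Response fields follow OpenWeatherMap's current-weather API; the
// 278.15 K (~5 °C) threshold is an arbitrary example value.
const weather = $json; // output of the OpenWeatherMap request
const description = weather.weather?.[0]?.description ?? '';
const tempK = weather.main?.temp ?? 0;

const shouldLog = description.includes('rain') || tempK < 278.15;

return [{
  json: {
    city: weather.name,
    'temperature (K)': tempK,
    humidity: weather.main?.humidity,
    conditions: description,
    status: shouldLog ? 'alert' : 'ok',
  },
}];
```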
by Keith Rumjahn
## Who's this for?

- If you own a website and need to analyze your keyword rankings
- If you need to create a keyword report on your rankings
- If you want to grow your keyword positions

SerpBear is an open-source SEO tool specifically for keyword analytics. Click here to read details of how I use it.

## Example output of A.I. analysis

Key observations about ranking performance:

- The top-performing keyword is "Openrouter N8N" with a current position of 7 and an improving trend.
- Two keywords, "Best Docker Synology" and "Bitwarden Synology", are not ranking in the top 100 and have a stable trend.
- Four keywords, "Obsidian Second Brain", "AI Generated Reference Letter", "Actual Budget Synology", and "N8N Workflow Generator", are not ranking well and have a declining trend.

Keywords showing the most improvement:

- "Openrouter N8N" has an improving trend and a relatively high ranking of 7.

Keywords needing attention:

- "Obsidian Second Brain" has a declining trend and a low ranking of 69.
- "AI Generated Reference Letter" has a declining trend and a low ranking of 84.
- "Actual Budget Synology", "N8N Workflow Generator", "Best Docker Synology", and "Bitwarden Synology" are not ranking in the top 100.

## Use case

Instead of hiring an SEO expert, I run this report weekly. It checks the keyword rankings of the past week and gives me recommendations on what to improve.

## How it works

1. The workflow gathers SerpBear analytics for the past 7 days.
2. It passes the data to openrouter.ai for A.I. analysis (a prompt-building sketch follows this description).
3. Finally it saves the result to Baserow.

## How to use this

1. Input your SerpBear credentials.
2. Enter your domain name.
3. Input your Openrouter.ai credentials.
4. Input your Baserow credentials.

You will need to create a Baserow database with columns: Date, Note, Blog.

Created by Rumjahn
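The hand-off to the LLM is essentially flattening the last seven days of positions into a prompt. A minimal Code-node sketch, where the keyword fields (`keyword`, `position`, `history`) are assumptions about the SerpBear payload; adjust them to the actual API response:

```javascript
// Hypothetical sketch: turn SerpBear keyword data into an analysis prompt.
// All field names are assumptions about the SerpBear API response shape.
const keywords = $json.keywords ?? [];

const lines = keywords.map((k) => {
  const history = (k.history ?? []).slice(-7).join(' -> ');
  return `"${k.keyword}": current position ${k.position ?? '>100'}, last 7 days: ${history}`;
});

const prompt = [
  'You are an SEO analyst. Given these keyword rankings for the past week,',
  'summarize the top performers, the most improved, and the keywords needing',
  'attention, then recommend what to improve:',
  '',
  ...lines,
].join('\n');

return [{ json: { prompt } }];
```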
by Ahmed Saadawi
# 📝 Sync MySQL Rows to Google Sheet

## Description

This n8n template automates the process of syncing new records from a MySQL database table into a Google Sheet, ideal for reporting, backup, or lightweight dashboards.

It is designed for teams or individuals who need to periodically export new data rows from a custom database (e.g., CRM, registrations, surveys) into a structured Google Sheet for further analysis, sharing, or archiving—without duplicates.

## 🛠️ What This Workflow Does

- **Runs every 15 minutes** via a schedule trigger.
- **Selects unsynced rows** (sync = 0) from a MySQL table (fifa25_customers).
- **Checks if records exist** to prevent unnecessary writes.
- **Appends records to a Google Sheet**, mapping fields like name, email, phone, gender, and more.
- **Updates the MySQL table** to mark those rows as synced (sync = 1) to avoid reprocessing.

Fully annotated using sticky notes for easier understanding and onboarding. The select-then-mark-synced cycle is sketched after this description.

## 📋 Setup Instructions

1. Create or select a Google Sheet and make sure the columns match the following: id, name, phone, birthdate, email, region, gender, datatime
2. Ensure your MySQL table (fifa25_customers) has a sync column (default = 0 for new rows).
3. Connect your MySQL and Google Sheets credentials inside n8n.
4. (Optional) Add custom filtering or column transformations as needed.

## 👤 Who Is It For?

- Marketers syncing leads to a spreadsheet
- Ops teams pulling user data from internal tools
- Analysts logging form submissions or customer data
- Anyone needing lightweight scheduled ETL from MySQL to Sheets

## 🔐 Credentials Required

- **MySQL**
- **Google Sheets OAuth2**

## ✅ Best Practices Followed

- Uses an IF node to prevent unnecessary processing
- Updates the source database to avoid duplicates
- Includes sticky notes for clarity
- All columns are explicitly mapped
- Works out-of-the-box on any n8n instance with proper credentials
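For reference, a standalone sketch of the cycle the two MySQL nodes perform, using the `mysql2` client. Connection settings are placeholders, and the append-to-Sheets step is stubbed out since n8n's Google Sheets node handles it:

```javascript
// Sketch of the sync cycle using mysql2 (npm i mysql2). Connection
// settings are placeholders; replace appendToSheet with real logic.
const mysql = require('mysql2/promise');

async function syncOnce() {
  const db = await mysql.createConnection({ host: 'localhost', user: 'n8n', database: 'app' });

  // 1. Select rows that have not been synced yet.
  const [rows] = await db.execute(
    'SELECT id, name, phone, birthdate, email, region, gender, datatime FROM fifa25_customers WHERE sync = 0'
  );
  if (rows.length === 0) return; // nothing to do (the IF node's job)

  await appendToSheet(rows); // stand-in for the Google Sheets append node

  // 2. Mark exactly those rows as synced so they are never re-exported.
  const ids = rows.map((r) => r.id);
  await db.query('UPDATE fifa25_customers SET sync = 1 WHERE id IN (?)', [ids]);
  await db.end();
}

async function appendToSheet(rows) {
  console.log(`Appending ${rows.length} rows to the sheet`);
}

syncOnce().catch(console.error);
```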
by Pedro Santos
# 🎥 Summarize YouTube Videos using SearchApi & LLM

## Who is this for?

This workflow is ideal for content creators, students, digital marketers, educators, and researchers who want to quickly summarize YouTube videos.

## What problem does this workflow solve?

Manually extracting important information from lengthy YouTube videos can be tedious and prone to errors. This workflow streamlines the process by automatically fetching video transcripts using SearchApi.io and producing concise, informative summaries through a summarization chain powered by any LLM provider. This allows users to quickly access crucial information without the need for manual transcription or detailed viewing.

## What this workflow does

1. Fetches the complete transcript of a YouTube video using SearchApi.
2. Combines the retrieved transcript into a single, continuous text (a sketch follows this description).
3. Utilizes a Summarization Chain with an LLM (e.g., OpenRouter models) to create a concise summary of the video content.

## Setup

1. Install the SearchApi community node:
   - Open Settings → Community Nodes inside your self-hosted n8n instance.
   - Fill npm Package Name with @searchapi/n8n-nodes-searchapi.
   - Accept the risk prompt and hit Install. It should now appear as a node when you search for it.
2. API configuration:
   - Set up your SearchApi.io credentials in n8n.
   - Add your preferred LLM provider credentials (e.g., OpenRouter API).
3. Input requirements:
   - Provide the YouTube video ID (e.g., wBuULAoJxok).
4. Connect the LLM integration:
   - Configure the summarization chain with your chosen model and parameters for text splitting.

## How to customize this workflow to meet your needs

- Adjust the summarization model or modify text-splitter parameters to accommodate different lengths and complexities of video transcripts.
- Integrate additional nodes to export summaries directly into your preferred tools, such as Google Drive, Slack, or email.
- Customize prompt templates in the summarization chain to obtain various summary styles (bullet points, paragraphs, etc.).
- Modify the trigger to suit your workflow.

## Example Usage

- Input: YouTube video ID (wBuULAoJxok).
- Output: A concise, actionable summary that highlights key ideas, recommendations, and insights from the video.
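Step 2 is a single Code node that flattens the transcript segments into one string. A minimal sketch, where the `transcripts` array and its `text` field are assumptions about the SearchApi response shape; check the node's actual output and rename accordingly:

```javascript
// Hypothetical Code-node sketch: merge transcript segments into one text.
// The field names (transcripts, text) are assumptions about SearchApi's
// YouTube transcript response; inspect the real output and adjust.
const segments = $json.transcripts ?? [];

const fullTranscript = segments
  .map((segment) => (segment.text ?? '').trim())
  .filter(Boolean)
  .join(' ');

return [{ json: { transcript: fullTranscript } }];
```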
by Lucas Peyrin
## How it works

This template launches your very first AI Agent—an AI-powered chatbot that can do more than just talk—it can take action using tools. Think of an AI Agent as a smart assistant, and the tools as the apps on its phone. By connecting it to other nodes, you give your agent the ability to interact with real-world data and services, like checking the weather, fetching news, or even sending emails on your behalf.

This workflow is designed to be the perfect starting point:

- **The Chat Interface:** A Chat Trigger node provides a simple, clean interface for you to talk to your agent.
- **The Brains:** The AI Agent node receives your messages, intelligently decides which tool to use (if any), and formulates a helpful response. Its personality and instructions are fully customizable in the "System Message".
- **The Language Model:** It uses **Google Gemini** to power its reasoning and conversation skills.
- **The Tools:** It comes pre-equipped with two tools to demonstrate its capabilities:
  - Get Weather: Fetches real-time weather forecasts (a possible backend is sketched after this description).
  - Get News: Reads any RSS feed to get the latest headlines.
- **The Memory:** A Conversation Memory node allows the agent to remember the last few messages, enabling natural, follow-up conversations.

## Set up steps

Setup time: ~2 minutes

You only need one thing to get started: a free Google AI API key.

1. Get your Google AI API key:
   - Visit Google AI Studio at aistudio.google.com/app/apikey.
   - Click "Create API key in new project" and copy the key that appears.
2. Add your credential in n8n:
   - On the workflow canvas, go to the Connect your model (Google Gemini) node.
   - Click the Credential dropdown and select + Create New Credential.
   - Paste your API key into the API Key field and click Save.
3. Start chatting!
   - Go to the Example Chat node.
   - Click the "Open Chat" button in its parameter panel.
   - Try asking it one of the example questions, like: "What's the weather in Paris?" or "Get me the latest tech news."

That's it! You now have a fully functional AI Agent. Try adding more tools (like Gmail or Google Calendar) to make it even more powerful.
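The template doesn't specify which service the Get Weather tool calls, so here is one keyless possibility as a sketch: an HTTP request to the Open-Meteo forecast API, with coordinates hardcoded for Paris as an example. Treat it as an illustrative option, not the workflow's actual endpoint.

```javascript
// Sketch of a possible Get Weather tool backend. Open-Meteo needs no API
// key; the template itself doesn't name a provider, so this is only one
// illustrative option.
async function getWeather() {
  const url =
    'https://api.open-meteo.com/v1/forecast' +
    '?latitude=48.85&longitude=2.35&current_weather=true'; // Paris

  const data = await (await fetch(url)).json();
  return (
    `Current temperature: ${data.current_weather.temperature} °C, ` +
    `wind ${data.current_weather.windspeed} km/h`
  );
}

getWeather().then(console.log);
```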
by InfraNodus
## Using the knowledge graphs instead of RAG vector stores

This workflow creates an AI chatbot agent that has access to several knowledge bases at the same time (used as "experts"). These knowledge bases are provided via InfraNodus GraphRAG, which uses knowledge graphs to deliver high-quality responses without the need to set up complex RAG vector store workflows.

The advantages of using GraphRAG instead of standard vector stores for knowledge are:

- Easy and quick to set up (no complex data import workflows needed)
- A knowledge graph has a holistic view of your knowledge base
- Better retrieval of relations between the document chunks = higher-quality responses

## How it works

This template uses the n8n AI Agent node as an orchestrating agent that decides which tool (knowledge graph) to use based on the user's prompt. Here's a step-by-step description:

1. The user submits a question using the AI chatbot (the n8n chat interface in this case, which can be accessed via a URL or embedded into any website).
2. The AI Agent node checks a list of tools it has access to. Each tool has a description of its knowledge, auto-generated by InfraNodus.
3. The AI agent decides which tool should be used to generate a response. It may reformulate the user's query to be more suitable for the expert.
4. The query is then sent to the InfraNodus HTTP node endpoint, which queries the graph that corresponds to that expert (a hypothetical request sketch follows this description).
5. Each InfraNodus GraphRAG expert provides a rich response that takes the whole context of its graph into account, along with a list of relevant statements retrieved using a combination of RAG and GraphRAG.
6. The n8n AI Agent node integrates the responses received from the experts to produce the final answer.
7. The final answer is sent back to the user's chat (or a webhook endpoint).

## How to use

You need an InfraNodus GraphRAG API account and key to use this workflow:

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using the PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body name field. Keep other settings intact or learn more about them at the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.

## Requirements

- An InfraNodus account and API key
- An OpenAI (or any other LLM) API key

## Customizing this workflow

You can use this same workflow with a Telegram bot, so you can interact with it using Telegram. There are many more customizations available. Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20174217658396-Using-InfraNodus-Knowledge-Graphs-as-Experts-for-AI-Chatbot-Agents-in-n8n

Also check out the video tutorial with a demo.
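To make step 4 of "How it works" concrete, here is a hedged sketch of what one expert's HTTP call could look like. Only the Bearer authorization and the graph `name` body field come from this template's setup steps; the endpoint path and any other body fields are placeholders, so copy the real values from the InfraNodus access points page.

```javascript
// Hypothetical sketch of one expert tool's HTTP request. The endpoint
// path and the `prompt` body field are placeholders; only the Bearer
// auth and the graph "name" field are described by the setup steps.
const INFRANODUS_ENDPOINT = 'https://infranodus.com/api/...'; // copy from the access points page
const API_KEY = process.env.INFRANODUS_API_KEY;

async function askExpert(graphName, query) {
  const response = await fetch(INFRANODUS_ENDPOINT, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: graphName, // the knowledge graph acting as this expert
      prompt: query,   // field name is an assumption
    }),
  });
  return response.json();
}

askExpert('my-expert-graph', 'What does the knowledge base say about X?')
  .then(console.log);
```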