by Risper
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How It Works
This n8n workflow automatically discovers high-quality business leads from Reddit by analysing posts across targeted subreddits:
- Loads your business profile from a connected Google Sheet.
- Uses AI to identify relevant subreddits where your potential customers engage.
- Generates intent-based Reddit search queries based on your services, keywords, and client pain points.
- Searches Reddit in real time using the generated queries.
- Classifies posts based on whether they show lead potential.
- Analyses high-potential posts for service fit, urgency, and estimated value.
- Filters and scores leads to prioritize high-conversion opportunities (see the sketch after this section).
- Saves the most promising leads to a dedicated Google Sheet.
- Sends Slack alerts to notify your sales team for immediate follow-up.

Requirements
Before using this workflow, ensure the following services are connected and configured:
- Google Sheets (OAuth2): Reads your business profile and writes qualified leads.
- Reddit (OAuth2): Performs Reddit post searches based on generated queries.
- Google Gemini API: Analyses posts, generates queries, and extracts insights.
- Slack API: Notifies your team with qualified lead summaries.

Google Sheets Setup
You will need two Google Sheets:

Business Profile Sheet (Input)
This sheet contains a single row describing your service business. The workflow reads it to generate relevant subreddit selections and search queries. Required fields (as headers in row 1): profession, industry, primary_services, service_keywords, target_client_profile, pain_points, intent_signals, urgency_indicators, price_range.

Reddit Leads Sheet (Output)
This sheet stores high-quality Reddit posts identified as potential leads. The workflow appends or updates rows based on post_id to avoid duplication. Expected columns: post_id, post_url, post_title, post_post, post_subreddit, post_date.
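The scoring step is where most of the tuning happens. Below is a minimal sketch of what the "filter and score leads" logic could look like in an n8n Code node, assuming the upstream AI analysis returns fields such as lead_potential, service_fit, urgency, and estimated_value — those names are illustrative, not the template's actual schema.

```javascript
// Hypothetical lead scoring for the "filter and score" step (Code node,
// "Run Once for All Items" mode). Field names are illustrative.
const scored = [];

for (const item of $input.all()) {
  const post = item.json;

  // Drop posts the classifier did not flag as promising.
  if (post.lead_potential !== 'high') continue;

  // Simple weighted score; tune the weights to your own conversion data.
  const score =
    (post.service_fit ?? 0) * 0.5 +
    (post.urgency ?? 0) * 0.3 +
    Math.min((post.estimated_value ?? 0) / 1000, 10) * 0.2;

  scored.push({ json: { ...post, lead_score: Number(score.toFixed(2)) } });
}

// Best opportunities first, so the Google Sheets and Slack nodes see them at the top.
return scored.sort((a, b) => b.json.lead_score - a.json.lead_score);
```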
by RealSimple Solutions
Who Is This For?
This workflow is designed for AI engineers, automation specialists, and content creators who need a scalable system to dynamically manage prompts stored in GitHub. It eliminates manual updates, enforces required variable checks, and ensures that AI interactions always receive fully processed prompts.

🚀 What Problem Does This Solve?
Manually managing AI prompts can be inefficient and error-prone. This workflow:
✅ Fetches dynamic prompts from GitHub
✅ Auto-populates placeholders with values from the setVars node
✅ Ensures all required variables are present before execution
✅ Processes the formatted prompt through an AI agent

🛠 How This Workflow Works
This workflow consists of three key branches, ensuring smooth prompt retrieval, variable validation, and AI processing.

1️⃣ Retrieve the Prompt from GitHub (HTTP Request → Extract from File → SetPrompt)
The workflow starts manually or via an external trigger and fetches a text-based prompt stored in a GitHub repository. The Extract from File node retrieves the content from the GitHub file, and the SetPrompt node stores the prompt, making it accessible for processing.
📌 Note: The prompt must contain variables in n8n expression format (e.g., {{ $json.company }}) so they can be dynamically replaced.

2️⃣ Extract & Auto-Populate Variables (Check All Prompt Vars → Replace Variables)
A Code node scans the prompt for placeholders in the n8n expression format ({{ $json.variableName }}) — see the sketch after this section. The workflow compares the required variables against the setVars node:
✅ If all variables are present, it proceeds to variable replacement.
❌ If any variables are missing, the workflow stops and returns an error listing them.
The Replace Variables node replaces all placeholders with values from setVars.
📌 Example of a properly formatted GitHub prompt:
Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}.
This ensures seamless replacement when processed in n8n.

3️⃣ AI Processing & Output (AI Agent → Prompt Output)
The Set Completed Prompt node stores the final, processed prompt. The AI Agent node (Ollama Chat Model) processes the prompt, and the Prompt Output node returns the fully formatted response.
📌 Optional: Modify this to use OpenAI, Claude, or other AI models.

⚠️ Error Handling: Missing Variables
If a required variable is missing, the workflow stops execution and provides an error message:
⚠️ Missing Required Variables: ["launch_date"]
This ensures no incomplete prompts are sent to AI agents.

✅ Example Use Case
📜 GitHub prompt file (using n8n expressions):
Hello {{ $json.company }}, your product {{ $json.features }} launches on {{ $json.launch_date }}.
🔹 Variables in the setVars node:
{ "company": "PropTechPro", "features": "AI-powered Property Management", "launch_date": "March 15, 2025" }
✅ Successful output:
Hello PropTechPro, your product AI-powered Property Management launches on March 15, 2025.
🚨 Error output (if launch_date is missing):
⚠️ Missing Required Variables: ["launch_date"]

🔧 Setup Instructions
1️⃣ Connect Your GitHub Repository
Store your prompt in a public or private GitHub repo. The workflow will fetch the raw file using the GitHub API.
2️⃣ Configure the setVars Node
Define the required variables in the setVars node. Make sure the variable names match those used in the prompt.
3️⃣ Test & Run
Click Test Workflow to execute. If variables are missing, it will show an error; if everything is correct, it will output the fully formatted prompt.

⚡ How to Customize This Workflow
💡 Need CRM or database integration? Connect the setVars node to an Airtable, Google Sheets, or HubSpot API to pull variables dynamically.
💡 Want to modify the AI model? Replace the Ollama Chat Model with OpenAI, Claude, or a custom LLM endpoint.

📌 Why Use This Workflow?
✅ No Manual Updates Required – Fetches prompts dynamically from GitHub.
✅ Prevents Broken Prompts – Ensures required variables exist before execution.
✅ Works for Any Use Case – Handles AI chat prompts, marketing messages, and chatbot scripts.
✅ Compatible with All n8n Deployments – Works on Cloud, Self-Hosted, and Desktop versions.
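For reference, here is a minimal sketch of what the placeholder check and replacement (the "Check All Prompt Vars" / "Replace Variables" logic) could look like combined into a single n8n Code node. It assumes the prompt text arrives as $json.prompt and that the variables were defined in a node named setVars; both names are illustrative and may differ from the template's actual nodes.

```javascript
// Illustrative version of the placeholder check + replacement logic.
const prompt = $json.prompt ?? '';
const vars = $('setVars').first().json;

// Find every {{ $json.something }} placeholder in the prompt.
const required = [...prompt.matchAll(/\{\{\s*\$json\.(\w+)\s*\}\}/g)].map(m => m[1]);
const missing = [...new Set(required)].filter(name => vars[name] === undefined);

if (missing.length > 0) {
  // Stop the workflow with a readable error listing what is missing.
  throw new Error(`⚠️ Missing Required Variables: ${JSON.stringify(missing)}`);
}

// Replace each placeholder with its value from setVars.
const completed = prompt.replace(/\{\{\s*\$json\.(\w+)\s*\}\}/g, (_, name) => vars[name]);

return [{ json: { prompt: completed, required } }];
```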
by Belgacem Dhiflaoui
Description

What Problem Does This Solve? 🛠️
This workflow automates the process of extracting key information from resumes received as email attachments and storing that data in a structured format within a Supabase database. It eliminates the manual effort of reviewing each resume, identifying relevant details, and entering them into a database. This streamlines the hiring process, making it faster and more efficient for recruiters and HR professionals.
Target audience: recruiters, HR departments, and talent acquisition teams.

What Does It Do? 🌟
- Monitors a designated email inbox for new messages with resume attachments.
- Extracts key information such as name, contact details, education, work experience, and skills from the attached resumes.
- Cleans and formats the extracted data.
- Stores the processed data securely in a Supabase database.

Key Features 📋
- Automatic email monitoring for resume attachments.
- Intelligent data extraction from various resume formats (e.g., PDF, DOC, DOCX).
- Customizable data fields to capture specific information.
- Seamless integration with Supabase for data storage.
- Uses OpenRouter to streamline API key management for services such as AI-powered parsing.

Setup Instructions

Prerequisites ⚙️
- **n8n Instance**: Self-hosted or cloud instance of n8n.
- **Email Account**: Gmail account with Gmail API access for receiving resumes.
- **Supabase Account**: A Supabase project with a database/table ready to store extracted resume data. You'll need the Supabase URL and API key.
- **OpenRouter Account**: For managing AI model API keys centrally when using LLM-based resume parsing.

Installation Steps 📦
1. Import the Workflow: Copy the exported workflow JSON and import it into your n8n instance via "Import from File" or "Import from URL".
2. Configure Credentials: In n8n > Credentials, add credentials for:
   - Email account (Gmail API): Provide the Client ID and Client Secret from the Google Cloud Platform.
   - Supabase: Provide the Supabase URL and the anon public API key.
   - OpenRouter (optional): Add your OpenRouter API key for use with any AI-powered resume parsing nodes.
   Assign these credentials to their respective nodes: Gmail Trigger → Email credentials, Supabase Insert → Supabase credentials, AI Parsing Node → OpenRouter credentials.
3. Set Up the Supabase Table: Create a table in Supabase with columns such as name, email, phone, education, experience, skills, received_date, etc. Make sure the field names align with the structure used in your workflow.
4. Customize Nodes: Optionally modify the workflow to use an **OpenAI model** directly for field extraction, replacing the **Basic LLM Chain** node that uses OpenRouter.
5. Test the Workflow: Send a test email with a resume attachment. Check n8n's execution log to confirm the workflow triggered, parsed the data, and inserted it into Supabase, then verify data integrity in your Supabase table.

How It Works

High-Level Workflow 🔍
1. Email Monitoring: Triggered when a new email with an attachment is received (via the Gmail API).
2. Attachment Check: Verifies the email contains at least one attachment.
3. Prepare Data: Extracts the attachment and prepares it for analysis.
4. Data Extraction: Uses an OpenRouter-powered LLM (if configured) to extract structured information from the resume.
5. Data Storage: The structured information is saved into the Supabase database (see the sketch after this section).

Node Names and Actions (Example)
- **Gmail Trigger**: Triggers when a new email is received.
- **IF**: Checks whether the received email includes any attachments.
- **Get Attachments**: Retrieves attachments from the triggering email.
- **Prepare Data**: Prepares the attachment content for processing.
- **Basic LLM Chain**: Uses an AI model via OpenRouter to extract relevant resume data and returns it as structured fields.
- **Supabase-Insert**: Inserts the structured resume data into your Supabase database.
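To illustrate the hand-off between the Basic LLM Chain and the Supabase-Insert node, the sketch below shows how a Code node might normalize the LLM output into the example table columns. The assumption that the LLM returns a JSON string in $json.text, and the exact column names, are illustrative; adapt them to your own prompt and table.

```javascript
// Illustrative post-processing between the LLM node and the Supabase insert.
let parsed;
try {
  parsed = JSON.parse($json.text);
} catch (e) {
  throw new Error('LLM output was not valid JSON: ' + e.message);
}

// Flatten arrays into strings so each value fits a single table column.
const joinIfArray = (v, sep) => (Array.isArray(v) ? v.join(sep) : v ?? '');

return [{
  json: {
    name: parsed.name ?? '',
    email: (parsed.email ?? '').toLowerCase().trim(),
    phone: parsed.phone ?? '',
    education: joinIfArray(parsed.education, '; '),
    experience: joinIfArray(parsed.experience, '; '),
    skills: joinIfArray(parsed.skills, ', '),
    received_date: new Date().toISOString(),
  },
}];
```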
by Romain Jouhannet
Linear Project/Issue Status and End Date to Productboard Feature Sync

Sync project and issue data between Linear and Productboard to keep teams aligned. This workflow updates Productboard features with the status and end date from Linear projects, or with the due date from Linear issues. It ensures consistent data and sends a Slack notification whenever changes are made.

Features
- Listens for updates in Linear projects/issues.
- Maps Linear statuses to Productboard feature statuses (see the sketch after this section).
- Updates Productboard feature details, including the timeframe.
- Sends a Slack notification summarizing the updates.

Setup
1. Linear credentials: Add your Linear API credentials in n8n.
2. Productboard credentials: Configure the Productboard API credentials in n8n.
3. Linear projects or issues: Select the Linear project(s) or issue(s) you want to monitor for updates.
4. Productboard custom field: Create a custom field in Productboard named "Linear". This field should store the URL of the Linear project or issue you want to sync. Retrieve the UUID of the custom field in Productboard and set it up in the "Get Productboard Feature ID" node.
5. Slack notification: Update the Slack node with the desired Slack channel ID.
6. Activate the workflow: Enable the workflow to automatically sync data when triggered by updates in Linear.
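The status mapping itself can live in a small Code node between the Linear trigger and the Productboard update. The sketch below is illustrative only: the Linear state names, Productboard status labels, and field names (state, targetDate, dueDate) are assumptions you would replace with the values used in your own workspaces.

```javascript
// Hypothetical mapping of Linear states to Productboard feature statuses.
const statusMap = {
  backlog: 'Candidate',
  planned: 'Planned',
  started: 'In progress',
  completed: 'Done',
  canceled: 'Archived',
};

const linear = $json; // payload from the Linear trigger
const productboardStatus = statusMap[linear.state] ?? 'Candidate';

// End date comes from the project's target date or the issue's due date,
// whichever the trigger delivered.
const endDate = linear.targetDate ?? linear.dueDate ?? null;

return [{ json: { productboardStatus, endDate } }];
```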
by Lucas Peyrin
How it works
This template is a complete, hands-on tutorial that lets you build and interact with your very first AI Agent. Think of an AI Agent as a standard AI chatbot with superpowers: the agent doesn't just talk; it can use tools to perform actions and find information in real time. This workflow is designed to show you exactly how that works.

- The Chat Interface (Chat Trigger): This is your window to the agent — a fully styled, public-facing chat window where you can have a conversation.
- The Brain (AI Agent Node): This is the core of the operation. It takes your message, understands your intent, and intelligently decides which "superpower" (or tool) it needs to use to answer your request. The agent's personality and instructions are defined in its extensive system prompt.
- The Tools (Tool Nodes): These are the agent's superpowers. A variety of useful and fun tools is included to showcase its capabilities:
  - Get a random joke.
  - Search Wikipedia for a summary of any topic.
  - Calculate a future date.
  - Generate a secure password.
  - Calculate a monthly loan payment (see the sketch after this section).
  - Fetch the latest articles from the n8n blog.
- The Memory (Memory Node): This gives the agent a short-term memory, allowing it to remember the last few messages in your conversation for better context.

When you send a message, the agent's brain analyzes it, picks the right tool for the job, executes it, and then formulates a helpful response based on the tool's output.

Set up steps
Setup time: ~3 minutes. This template is nearly ready to go out of the box — you just need to provide the AI's "brain".
1. Configure credentials: This workflow requires an API key for an AI model. Make sure you have credentials set up in your n8n instance for either Google AI (Gemini) or OpenAI.
2. Choose your AI brain (LLM): By default, the workflow uses the Google Gemini node. If you have Google AI credentials, you're all set! If you prefer OpenAI, simply disable the Gemini node and enable the OpenAI node. You only need one active LLM node — make sure it is connected to the Agent parent node.
3. Explore the tools: Take a moment to look at the different tool nodes connected to the Your First AI Agent node. This is where the agent gets its abilities! You can add, remove, or modify these to create your own custom agent.
4. Activate and test: Activate the workflow, open the public URL for the Example Chat Window node (you can copy it from the node's panel), and start chatting! Try asking it things like:
   - "Tell me a joke."
   - "What is n8n?"
   - "Generate a 16-character password for me."
   - "What are the latest posts on the n8n blog?"
   - "What is the monthly payment for a $300,000 loan at 5% interest over 30 years?"
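To give a feel for what a tool can look like under the hood, here is a minimal sketch of the loan-payment tool written as a Code node using the standard amortization formula. The input field names (principal, annualRate, years) are assumptions; the template's actual tool may be wired differently.

```javascript
// Illustrative "monthly loan payment" tool.
const { principal, annualRate, years } = $json;

const r = annualRate / 100 / 12; // monthly interest rate
const n = years * 12;            // number of monthly payments

// Standard amortization formula: P * r / (1 - (1 + r)^-n)
const payment = r === 0
  ? principal / n
  : (principal * r) / (1 - Math.pow(1 + r, -n));

return [{ json: { monthlyPayment: Number(payment.toFixed(2)) } }];
```

For the sample question above ($300,000 at 5% over 30 years), this formula gives roughly $1,610.46 per month.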
by Yaron Been
🔍 Competitor Review Scraper & Ad Copy Generator (Trustpilot + Bright Data + GPT-4o-mini)

📌 Who It's For
Marketers, business owners, and agencies looking to:
- Analyze competitor pain points
- Generate high-impact Facebook ad copy
- Automate manual data processing

🧩 How It Works
This n8n-based workflow combines Bright Data, Google Sheets, and OpenAI to scrape, process, and transform Trustpilot reviews into ready-to-use ad copy.

🔹 Step-by-Step Breakdown
1. Trigger (Manual Form Submission) — Input required: the competitor's Trustpilot URL and a review timeframe (30d, 3m, 6m, 12m).
2. Fetch Reviews — Calls Bright Data's Dataset API with the URL and timeframe, then polls until the snapshot is ready.
3. Retrieve & Store — Extracts all reviews and saves them into a structured Google Sheet.
4. Filter & Aggregate — Filters to only 1–2 star reviews and summarizes the common negative feedback (see the sketch after this section).
5. Generate Ad Copy — Sends the summary to OpenAI GPT-4o-mini, which produces 3 variations of ad copy targeting the pain points.
6. Distribute Insights — Sends the ad copy and summary via email to the marketing team.

✅ Requirements
- An LLM (OpenAI) account
- Google Sheets — copy this sheet: https://docs.google.com/spreadsheets/d/1Zi758ds2_aWzvbDYqwuGiQNaurLgs-leS9wjLWWlbUU/edit?gid=0#gid=0
- A Bright Data account

⚙️ Setup Instructions
Step 1: Google Sheets — Copy the Google Sheets template above and do not change the column headers.
Step 2: n8n Credential Setup — Google Sheets: OAuth2; Bright Data: Authorization header; OpenAI: API key for GPT-4o-mini.
Step 3: Import Workflow — Import the .json file into n8n, configure your sheet and dataset ID, and adjust the GPT prompts as needed.
Step 4: Run the Workflow — Trigger it via the form and receive ad copy plus review insights via email.

🧠 Tips & Best Practices
- Bright Data snapshots may take time — polling is handled by the workflow.
- Focusing on 1–2 star reviews yields the most actionable pain points.
- You can customize the GPT-4o-mini prompts for tone or vertical.

💬 Support & Feedback
Need help or customization?
📧 Email: Yaron@nofluff.online
📺 YouTube: @YaronBeen
🔗 LinkedIn: linkedin.com/in/yaronbeen
📚 Bright Data Docs: docs.brightdata.com/introduction
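As a rough illustration of the "Filter & Aggregate" step, the Code node sketch below keeps only 1–2 star reviews and builds a compact digest for the GPT-4o-mini prompt. The field names (rating, review_text) are assumptions about the Bright Data dataset output, not the template's guaranteed schema.

```javascript
// Illustrative filter + aggregation of negative Trustpilot reviews.
const negatives = $input.all()
  .map(item => item.json)
  .filter(r => Number(r.rating) <= 2);

// A short, bounded digest keeps the OpenAI prompt within token limits.
const digest = negatives
  .slice(0, 50)
  .map(r => `- (${r.rating}★) ${String(r.review_text ?? '').slice(0, 300)}`)
  .join('\n');

return [{ json: { negativeCount: negatives.length, digest } }];
```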
by Gavin
This template gives you the ability to monitor all uplinks in your Meraki Dashboard and then alert your team through the method you prefer. This example sends a Teams notification to our Dispatch channel.

Setup will probably take around 30 minutes to 1 hour with the template provided. The most time-intensive steps are getting a Meraki API key, which I go over, and setting up the Teams node, for which n8n has good documentation.

Tutorial & explanation: https://www.youtube.com/watch?v=JvaN0dNwRNU
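If you want to see the shape of the alerting logic, here is a rough sketch of an uplink check as it could run in a Code node after an HTTP Request node that calls the Meraki Dashboard API's organization appliance uplink statuses endpoint. The response fields shown (serial, networkId, uplinks[].interface, uplinks[].status) follow that endpoint's documented shape, but verify them against the current Meraki API docs and the template itself before relying on this.

```javascript
// Illustrative check: emit one item per uplink that is not reporting "active",
// so the downstream Teams node only fires when something needs attention.
const alerts = [];

for (const device of $input.all().map(i => i.json)) {
  for (const uplink of device.uplinks ?? []) {
    if (uplink.status !== 'active') {
      alerts.push({
        json: {
          serial: device.serial,
          networkId: device.networkId,
          interface: uplink.interface,
          status: uplink.status,
          message: `Uplink ${uplink.interface} on ${device.serial} is ${uplink.status}`,
        },
      });
    }
  }
}

return alerts; // empty output = no notification
```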
by Blockia Labs
Time Logging on Clockify Using Slack

How it works
This workflow simplifies time tracking for teams and agencies by integrating Slack with Clockify. It enables users to log, update, or delete time entries directly within Slack, leveraging an AI-powered assistant for seamless and conversational interactions. Key features include:
- **Effortless Time Logging**: Create and manage time entries in Clockify without leaving Slack.
- **AI-Powered Assistant**: Get step-by-step guidance to ensure accurate and efficient time logging.
- **Project and Client Management**: Retrieve project and client information from Clockify effortlessly.
- **Overlap Prevention**: Avoid overlapping entries with built-in time validation (see the sketch after this section).
- **Automated Descriptions**: Generate ethical, grammatically correct descriptions for time logs.

Set up steps
1. Prepare your integrations: Ensure you have active accounts for both Slack and Clockify, and generate your Clockify API credentials for the integration.
2. Import the workflow: Download and import the workflow template into your n8n instance, then configure it to connect with your Slack and Clockify accounts.
3. Configure the workflow: Add your Clockify API credentials in the workflow settings and set up the Slack Trigger to listen for app mentions or specific commands.
4. Test the workflow: Use Slack to create a time entry and verify it in Clockify. Test updating and deleting existing entries to ensure smooth functionality, and check for any overlapping time logs or incorrect data entries.

Why use this workflow?
- **Efficiency**: Eliminate the need to switch between tools for time tracking.
- **Accuracy**: AI-driven validation ensures error-free entries.
- **Automation**: Simplify repetitive tasks like updating or deleting time logs.
- **Proactive Guidance**: The conversational assistant ensures smooth operations.
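For the overlap-prevention idea, here is a minimal sketch of how a Code node could validate a proposed entry against entries already fetched from Clockify. The node name "Get Time Entries" and the input fields $json.start / $json.end are hypothetical; the timeInterval.start / timeInterval.end shape follows Clockify's time-entry API, but check it against your own workflow.

```javascript
// Illustrative overlap check for a proposed Clockify time entry.
const start = new Date($json.start).getTime();
const end = new Date($json.end).getTime();

const existing = $('Get Time Entries').all().map(i => i.json);

// Two intervals overlap when each one starts before the other ends.
const conflict = existing.find(e => {
  const eStart = new Date(e.timeInterval.start).getTime();
  const eEnd = new Date(e.timeInterval.end).getTime();
  return start < eEnd && eStart < end;
});

if (conflict) {
  throw new Error(`Overlaps existing entry ${conflict.id} (${conflict.description ?? 'no description'})`);
}

return [{ json: { start: $json.start, end: $json.end, valid: true } }];
```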
by Alex Huy
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Description
This n8n workflow automatically scrapes Airbnb listings from a specified location and saves the data to a Google Sheet. It performs pagination to collect listings across multiple pages, extracts detailed information for each property, and organizes the data in a structured format for easy analysis.

How it Works
The workflow operates through these high-level steps:
1. Search Initialization: Starts with an Airbnb search for a specific location (London) with defined check-in/check-out dates and guest count.
2. Pagination Loop: Automatically processes multiple pages of search results using cursor-based pagination (see the sketch after this section).
3. Data Extraction: Parses listing information including names, prices, ratings, reviews, and URLs.
4. Detail Enhancement: Fetches additional details for each listing (house rules, highlights, descriptions, amenities).
5. Data Storage: Saves all collected data to a Google Sheet with proper formatting.
6. Loop Control: Continues until reaching the page limit (2 pages) or no more results are available.

Setup Steps

Prerequisites
- n8n instance with MCP (Model Context Protocol) support
- Google Sheets API credentials configured
- Airbnb MCP client properly set up

Configuration Steps
1. Configure the MCP Client: Set up the Airbnb MCP client with your credential ID and ensure the client has access to the airbnb_search and airbnb_listing_details tools.
2. Google Sheets Setup: Create a Google Sheet with ID 15IOJquaQ8CBtFilmFTuW8UFijux10NwSVzStyNJ1MsA and configure Google Sheets OAuth2 credentials (ID: 6YhBlgb8cXMN3Ra2). Ensure the sheet has these column headers: id, name, url, price_per_night, total_price, price_details, beds_rooms, rating, reviews, badge, location, houseRules, highlights, description, amenities.
3. Search Parameters:
   - Location: "London" (can be modified in the "Airbnb Search" node)
   - Adults: 7
   - Children: 1
   - Check-in: "2025-08-14"
   - Check-out: "2025-08-17"
   - Page limit: 2 (can be adjusted in the "If1" condition node)
4. Execution: Use the manual trigger "When clicking 'Execute workflow'" to start the process, monitor the workflow execution through the n8n interface, and check the Google Sheet for populated data after completion.

Key Features
- Automatic Pagination: Processes multiple pages without manual intervention.
- Comprehensive Data: Extracts both basic listing info and detailed property information.
- Error Handling: Includes JSON parsing error handling and data validation.
- Batch Processing: Uses split batches for efficient processing of individual listings.
- Real-time Updates: Appends new data to existing Google Sheet records.

Output Data Structure
Each listing contains:
- Basic info: ID, name, URL, pricing details, room/bed count
- Ratings: Average rating and review count
- Location: Latitude and longitude coordinates
- Enhanced details: House rules, highlights, descriptions, amenities
- Metadata: Page number, check-in/out dates, badges
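As a rough sketch of the pagination loop control (the logic the "If1" node routes on), the Code node below tracks the page counter and cursor between iterations. The field names searchResults, nextCursor, and page are assumptions about how the MCP search output was parsed; adjust them to the actual tool response.

```javascript
// Illustrative loop-control logic for cursor-based pagination.
const PAGE_LIMIT = 2;

const results = $json.searchResults ?? [];
const nextCursor = $json.nextCursor ?? null;
const page = $json.page ?? 1;

// Continue only while there is a cursor, results came back, and we are under the limit.
const shouldContinue = nextCursor !== null && results.length > 0 && page < PAGE_LIMIT;

return [{
  json: {
    page: page + 1,
    cursor: nextCursor,
    shouldContinue,        // the If node routes on this flag
    listingCount: results.length,
  },
}];
```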
by Hubschrauber
Overview
This template integrates an IOT multi-button switch (meant for controlling a dimmable light) with Spotify playback functions, via MQTT messages. This isn't likely to work without some tinkering, but it should be a good head start on receiving/routing IOT/MQTT messages and hooking up to a Spotify-like API.

Requirements
- An IOT device capable of generating events that can be delivered as MQTT messages through an MQTT broker, e.g. an Ikea Strybar remote
- An MQTT broker to which n8n can connect and consume messages, e.g. Zigbee2MQTT in HomeAssistant
- A Spotify developer-account (which provides access to API functions via OAuth2 authorization)
- A Spotify user-account (which provides access to Spotify streamed content, user settings, etc.)

Setup
1. Create an MQTT credential item in n8n and assign it to the MQTT Trigger node.
2. Modify the MQTT Trigger node to match the topic for your IOT device messages.
3. Modify the switch/router nodes to map to the message text from your IOT button (e.g. arrow_left_click, brightness_up_click, etc.) — see the sketch after this section.
4. Create a Spotify developer-account (or use the login for a user-account).
5. Create an "App" in the developer-account to represent the n8n workflow.
   Chicken/egg ALERT: The n8n Spotify credentials dialog box will display the "OAuth Redirect URL" required to create the App in Spotify, but the n8n credential item itself cannot be created until AFTER the App has been created.
6. Create a Spotify credentials item in n8n. Open the Settings on the Spotify App to find the required Client ID and Client Secret information. ALERT: Save this before proceeding to the Connect step.
7. Connect the n8n Spotify credential item to the Spotify user-account. ALERT: Expect n8n to open a separate OAuth2 window on authorization.spotify.com here, which may require a login to the Spotify user-account.
8. Open each of the HTTP and Spotify nodes, one by one, and re-assign them to your Spotify credential (try not to miss any). (Then, probably, upvote this feature request: https://community.n8n.io/t/select-credentials-via-expression/5150)
9. Modify the variable values in the Globals node to match your own environment:
   - target_spotify_playback_device_name — the name of a playback device available to the Spotify user-account
   - favorite_playlist_name — the name of a playlist to start when one of the button actions is indicated in the MQTT message. Used in the example "Custom Function 2" sequence.

Notes
- You're on your own for getting the multi-button remote switch talking to MQTT, figuring out what the exact MQTT topic name is, and mapping the message parts to the workflow (actions, etc.).
- The next/previous actions are wired up to not transfer control to the target device. This alternative routing just illustrates a different behavior than the remaining actions/functions, which include activation of the target device when required.
- Some of the Spotify API interactions use the Spotify node in n8n, but many of the available Spotify API functions are limited or not implemented at all in the Spotify node. So, in other cases, a regular HTTP node is used with the Spotify OAuth2 API credential instead. By modifying one of the examples included in the template, it should be possible to call nearly anything the Spotify API has to offer.

Spotify+n8n OAuth Mini-Tutorial

Definitions
- The developer-account is the Spotify login for creating a spotify-app, which will be associated with a client id and client secret.
- The user-account is the Spotify login that has permission to stream songs, set up playback devices, etc.
- A spotify-login allows access to a Spotify user-account, or a Spotify developer-account, OR BOTH.
- The spotify-app, which has a client id and client secret, is an object created in the developer-account.
- The app-implementation (in this case, an n8n workflow) uses the spotify-app's credentials (client id / client secret) to call Spotify API endpoints on behalf of a user-account.

Using One Spotify Login as Both User and Developer
When an n8n Spotify-node or HTTP-node (i.e. an app-implementation) calls a Spotify API endpoint, the credentials item may be using the client id and client secret from a spotify-app which was created in a developer-account that is one and the same spotify-login as the user-account. However, it helps to remind yourself that, from the Spotify API server's perspective, the developer-account + spotify-app and the user-account are two independent entities.

n8n Spotify-OAuth2-API Credential Authorization Process
The two layers/steps in the process of authorizing an n8n Spotify-OAuth2-API credential to make API calls are:
1. n8n must identify itself to Spotify as the app-implementation associated with the developer-account/spotify-app by sending the app's credentials (client id and client secret) to Spotify. The Client ID and Client Secret are supplied in the n8n Spotify OAuth2 credentials UI/dialog-box.
2. Separately, n8n must obtain an authorization token from Spotify to represent the permissions granted by the user to execute actions (call API endpoints) on behalf of the user, i.e. to access things that belong to the user-account. This authorization for user-account access is obtained when the "Connect" or "Reconnect" button is clicked in the n8n Spotify credentials UI/dialog-box (which pops up a separate authorization UI/browser-window managed by Spotify).

The authorization for a given spotify-app stays "registered" in the user-account until revoked. See: https://support.spotify.com/us/article/spotify-on-other-apps/ (direct link: https://www.spotify.com/account/apps/). More than one user-account can be authorized for a given spotify-app. A particular n8n Spotify-OAuth2-API credential item appears to cache an authorization token for the user-account that was most recently authorized. Up to 25 users can be allowed access to a spotify-app in Developer-Mode, but any user-account other than the one associated with the developer-account must be added by email address at https://developer.spotify.com/dashboard/{{app-credential-id}}/users

ALERT: If the browser running the n8n UI is ALSO logged into a Spotify account, and the spotify-app is already authorized for that Spotify account, the "Reconnect" button in the Spotify-OAuth2-API credential dialog may automatically grab a token for that logged-in user-account, offering no opportunity to select a different user-account. This can be managed somewhat by using "incognito" browser windows for n8n, Spotify, or both.

References
- n8n Spotify Credentials Docs
- Spotify Authorization Docs
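To make the action-to-playback mapping from the Setup section more concrete, here is a hedged sketch of a Code node that routes MQTT button actions to Spotify Web API player calls. The action names and the $json.action payload field follow the Zigbee2MQTT convention but are only examples; the endpoints listed are standard Spotify player endpoints, while the specific mapping and volume levels are arbitrary illustrations, not the template's actual wiring.

```javascript
// Illustrative routing of MQTT button actions to Spotify Web API calls.
const action = $json.action;

const routes = {
  on:  { method: 'PUT',  endpoint: '/v1/me/player/play' },
  off: { method: 'PUT',  endpoint: '/v1/me/player/pause' },
  brightness_up_click:   { method: 'PUT',  endpoint: '/v1/me/player/volume?volume_percent=70' },
  brightness_down_click: { method: 'PUT',  endpoint: '/v1/me/player/volume?volume_percent=40' },
  arrow_right_click:     { method: 'POST', endpoint: '/v1/me/player/next' },
  arrow_left_click:      { method: 'POST', endpoint: '/v1/me/player/previous' },
};

const route = routes[action];
if (!route) {
  return []; // ignore messages we have no mapping for
}

// A downstream HTTP Request node would prefix https://api.spotify.com
// and attach the Spotify OAuth2 credential.
return [{ json: { ...route, action } }];
```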
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
The Recipe Recommendation Engine with Bright Data MCP & OpenAI is a powerful automated workflow that combines Bright Data's MCP for scraping trending or regional recipe data with OpenAI GPT-4o mini to generate personalized recipe recommendations. This automated workflow is designed for:
- Food Bloggers & Culinary Creators: who want to automate the extraction and curation of recipes from across the web to generate content, compile cookbooks, or publish newsletters.
- Nutritionists & Health Coaches: who need structured recipe data to analyze ingredients, calories, and nutrition for personalized meal planning or dietary tracking.
- AI/ML Engineers & Data Scientists: building models that classify cuisines, predict recipes from ingredients, or generate dynamic meal suggestions using clean, structured datasets.
- Grocery & Meal Kit Platforms: who aim to extract recipes to power recommendation engines, ingredient lists, or personalized meal plans.
- Recipe Aggregator Startups: looking to scale recipe data collection, filtering, and standardization across diverse cooking websites with minimal human intervention.
- Developers Integrating Cooking Features: into apps or digital assistants that offer recipe recommendations, step-by-step cooking instructions, or nutritional insights.

What problem is this workflow solving?
This workflow solves:
- Automated recipe data extraction from any public URL
- AI-driven structured data extraction
- Scalable looped crawling and processing
- Real-time notifications and data persistence

What this workflow does
1. Set Recipe Extract URL: Configure the recipe website URL in the input node, and set your Bright Data zone name and authentication.
2. Paginated Data Extract: Triggers a paginated extraction across multiple pages (recipe listing, index, or search pages) and returns a list of recipe links for processing.
3. Loop Over Items: Loops through the array of recipe links; each link is passed individually to the scraping engine.
4. Bright Data MCP Client (per recipe): Scrapes each individual recipe page using scrape_as_html and bypasses common anti-bot protections via the Bright Data Web Unlocker.
5. Structured Recipe Data Extract (via OpenAI GPT-4o mini): Converts raw HTML to clean text using an LLM preprocessing node, then uses OpenAI GPT-4o mini to extract structured data (see the sketch at the end of this description).
6. Webhook Notification: Pushes the structured recipe data to your configured webhook endpoint as a JSON payload, ideal for Slack, internal APIs, or dashboards.
7. Save Response to Disk: Saves the structured recipe JSON to the local file system.

Pre-conditions
- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the OpenAI account credentials.
5. Make sure to set the fields in the Set the Recipe Extract URL node.
6. Remember to set the webhook_url to send a webhook notification with the recipe response.
7. Set the desired local path in the Write the structured content to disk node to save the recipe response.

How to customize this workflow to your needs
You can tailor the Recipe Recommendation Engine workflow to better fit your specific use case by modifying the following key components:
1. Input Fields Node: Update the recipe URL to target specific cuisine sites or recipe types (e.g., vegan, keto, regional dishes).
2. LLM Configuration: Swap out the OpenAI GPT-4o mini model for another provider (like Google Gemini) if you prefer, and modify the structured-data prompt to extract any custom fields you wish.
3. Webhook Notification: Configure the Webhook Notification node to point to your preferred integration (e.g., Slack, Discord, internal APIs).
4. Storage Destination: Change the Save to Disk node to store the structured recipe data in a cloud bucket (S3, GCS, Azure Blob, etc.), a database (MongoDB, PostgreSQL, Firestore), or Google Sheets / Airtable for spreadsheet-style access.
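For step 5 above, a small normalization pass between the GPT-4o mini node and the webhook / save-to-disk steps keeps the output predictable. The sketch below is illustrative: the assumption that the LLM output arrives as a JSON string in $json.text and the recipe field names (title, ingredients, steps, ...) are an assumed schema, not the template's exact prompt output.

```javascript
// Illustrative normalization of the structured recipe before delivery.
let recipe;
try {
  recipe = JSON.parse($json.text);
} catch (e) {
  throw new Error('Could not parse structured recipe JSON: ' + e.message);
}

return [{
  json: {
    title: recipe.title ?? 'Untitled recipe',
    cuisine: recipe.cuisine ?? null,
    servings: recipe.servings ?? null,
    prep_time: recipe.prep_time ?? null,
    ingredients: recipe.ingredients ?? [], // array of strings
    steps: recipe.steps ?? [],             // ordered instructions
    source_url: $json.url ?? null,
    extracted_at: new Date().toISOString(),
  },
}];
```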
by InfraNodus
This template can be used to find the content gaps in your competitors' discourse: identifying the topics they are not yet connecting and giving you an opportunity to fill in this gap with your content and product ideas. It will also generate research questions that will help bridge the gaps and generate new ideas.

The template showcases the use of multiple n8n nodes and processes:
- enriching a Google Sheets file with new data
- data extraction
- content enhancement using the GraphRAG approach
- content gap / research question generation

This approach can be very useful for research, marketing, and SEO applications, as you can quickly get an overview of the main topics that are available online for a certain niche and understand what is missing.

What are Content Gaps in Marketing and SEO?
In the context of SEO, content gaps are usually understood as the topics that your competitors rank for but you do not. However, it's hard to rank for these topics because there's very high competition. A much more effective approach is to identify the gaps between the topics your competitors are talking about that are not yet bridged in their discourse. If you address these gaps in your content, you will increase the informational gain that your content offers and also provide a novel perspective while touching upon the topics that are relevant in your field.

For example, if we analyze the top websites for "body and physical practices, fitness, etc.", we will see that most of them talk about the health and fitness aspects, and another big topic is the community aspect. However, there is a gap between the two topics, which means that most of the websites (companies) in this niche don't mention the two in the same context. This might be an opportunity: bridging the gap between health and fitness while also emphasizing the community aspect that comes with a collective practice.

How it works
This template consists of two stages:
1. Data enrichment of a Google Sheets file with a list of your competitors, using InfraNodus' GraphRAG to generate topical summaries and graph summaries for every URL you're analyzing.
2. Insight generation, using InfraNodus to identify the main topical clusters and gaps in those summaries; these insights are then added to the Google Sheets file.

Additionally, it contains a sub-workflow that you can activate and launch to ask a Perplexity model to conduct market research, find the companies that operate in your field, and populate the original Google Sheets file.

Here's a description step by step:
- Step 0: Populate the Google Sheets file with the company data (either manually, using the sub-workflow provided, or with Manus AI / Deep Research).
- Steps 1-2: Trigger and launch the workflow, extracting the company URL from the Google Sheets row.
- Step 3: Scrape the URL content from the companies' websites and clean the data (see the sketch at the end of this description).
- Steps 5-7: Use the InfraNodus GraphRAG content enhancer to get a topical summary and a graph summary.
- Steps 8-10: Use InfraNodus AI to generate insight advice and research questions based on the content gaps.

How to use
You need an InfraNodus GraphRAG API account and key to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Create a separate knowledge graph for each expert (using the PDF / content import options) in InfraNodus.
4. For each graph, go to the workflow and paste the name of the graph into the body name field. Keep the other settings intact, or learn more about them on the InfraNodus access points page.
5. Once you add one or more graphs as experts to your flow, add the LLM key to the OpenAI node and launch the workflow.

Requirements
- An InfraNodus account and API key
- A Google Sheets account and an authorization key
Note: an OpenAI key is not required, but you might want to get a Perplexity AI key if you'd like to use the sub-workflow that populates the Google Sheet with your competitors' website addresses (if you don't have this list yet).

Customizing this workflow
You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way).

Check out our n8n templates for ideas at https://n8n.io/creators/infranodus/
Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20234254556828-Find-Content-Gaps-in-Websites-Market-Research-and-SEO-n8n-Workflow
Also check the full tutorial with a conceptual explanation at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow
A video tutorial with a demo is also available.
For support and help with this workflow, please contact us at https://support.noduslabs.com
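Relating to Step 3 above (scraping and cleaning the website content), here is a minimal sketch of a cleaning pass in a Code node, assuming the scraped HTML arrives on $json.data (the field name depends on how your HTTP Request node is configured). A real page may need more careful handling, but stripping scripts, styles, and tags like this is usually enough before sending the text to InfraNodus.

```javascript
// Illustrative "clean the scraped page" step before topical analysis.
const html = $json.data ?? '';

const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, ' ')
  .replace(/<style[\s\S]*?<\/style>/gi, ' ')
  .replace(/<[^>]+>/g, ' ')           // drop remaining tags
  .replace(/&nbsp;|&amp;|&quot;|&#\d+;/g, ' ')
  .replace(/\s+/g, ' ')               // collapse whitespace
  .trim()
  .slice(0, 20000);                   // keep the request size reasonable

return [{ json: { url: $json.url, text } }];
```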