by Oneclick AI Squad
In this guide, we’ll walk you through setting up a smart workflow that triggers on new restaurant orders, extracts and formats customer and dish details from Google Sheets, uses Gemini AI to recommend dishes or offers, and sends suggestions via Telegram. Ready to automate your order processing and enhance customer experience? Let’s dive in!

What’s the Goal?
- Automatically trigger the workflow when a new order is placed.
- Extract and format customer information and order details from Google Sheets.
- Use Gemini AI to analyze orders and recommend dishes or offers.
- Send personalized suggestions to customers via Telegram.
- Enable real-time order processing and customer engagement.

By the end, you’ll have a smart system that processes orders and suggests items effortlessly.

Why Does It Matter?
Manual order processing and suggestion generation are inefficient and miss opportunities. Here’s why this workflow is a game changer:
- **Real-Time Efficiency**: Instantly process orders and suggest items.
- **Personalized Engagement**: AI-driven suggestions enhance customer satisfaction.
- **Time-Saving Automation**: Reduce manual effort in order management.
- **Improved Sales**: Targeted recommendations can boost order value.

Think of it as your intelligent assistant for orders and customer delight.

How It Works
Here’s the step-by-step magic behind the automation:

Step 1: New Order Trigger
Trigger the workflow when a new order is detected (e.g., via a form submission).

Step 2: Extract & Format Order
Extract and format dish ordering details from the customer order details sheet for further processing.

Step 3: Save Customer Info
Save customer information (e.g., ID, name, mobile number) from the customer details sheet.

Step 4: Save Dish Info
Save dish details (e.g., name, quantity, price) from the customer order details sheet.

Step 5: Prepare Dish Details for AI
Prepare the dish details for AI analysis to generate recommendations.

Step 6: Clean Data for Input to Improve AI Understanding
Clean and structure the data to enhance AI comprehension (see the sketch after the setup notes below).

Step 7: Use Gemini AI to Recommend Dishes or Offers
Utilize Gemini AI (via Google Chat Model and Think Tool) to recommend dishes or offers based on order data.

Step 8: Format AI Suggestions
Format the AI-generated suggestions into a Telegram-friendly message.

Step 9: Send Suggestions via Telegram
Send the formatted suggestions directly to the customer via Telegram.

How to Use the Workflow?
Importing a workflow in n8n is a straightforward process that allows you to use pre-built workflows to save time. Below is a step-by-step guide to importing the Smart Restaurant Order & Suggestion System workflow in n8n.

Steps to Import a Workflow in n8n

1. Obtain the Workflow JSON
- Source the workflow: workflows are shared as JSON files or code snippets, e.g., from the n8n community, a colleague, or exported from another n8n instance.
- Format: ensure you have the workflow in JSON format, either as a file (e.g., workflow.json) or copied text.

2. Access the n8n Workflow Editor
- Log in to n8n (via n8n Cloud or a self-hosted instance).
- Navigate to the Workflows tab in the n8n dashboard.
- Click Add Workflow to create a blank workflow.

3. Import the Workflow
- Option 1: Import via JSON Code (Clipboard):
  - Click the three dots (⋯) in the top-right corner to open the menu.
  - Select Import from Clipboard.
  - Paste the JSON code into the text box.
  - Click Import to load the workflow.
- Option 2: Import via JSON File:
  - Click the three dots (⋯) in the top-right corner.
  - Select Import from File.
  - Choose the .json file from your computer.
  - Click Open to import.

Setup Notes

**Google Sheet Columns**:

Customer Details Sheet:

| Customer id | Customer name | Customer mobile number |
|---|---|---|
| CUST-JW4Z8Y | ajay | 9898989898 |
| CUST-VEITPW | akash | 9898976898 |

Customer Order Details Sheet:

| Customer id | Dish name | Dish quantity | Per unit price | Actual price |
|---|---|---|---|---|
| CUST-JW4Z8Y | Tandoori Chicken | 1 | 250 | 250 |
| CUST-VEITPW | Masala Dosa | 1 | 150 | 150 |

- **Google Sheets Credentials**: Configure OAuth2 settings in the extract and save nodes with your Google Sheet ID and credentials.
- **Gemini AI**: Set up the Gemini AI node with Google Chat Model and Think Tool credentials.
- **Telegram Integration**: Authorize the Send Suggestions node with Telegram API credentials and the customer’s chat ID or mobile number.
- **Trigger Setup**: Configure the New Order Trigger node to detect new orders (e.g., via form or webhook).
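To make Steps 5–6 concrete, here is a minimal Code-node sketch of the data-cleaning step. The node names and sheet column names referenced below are assumptions based on the setup notes above; adjust them to match the actual nodes in the imported workflow.

```javascript
// Minimal sketch (assumed node and column names): merge the saved customer row
// with the order rows into one compact object the Gemini agent can reason over.
const customer = $('Save Customer Info').first().json;          // assumed node name
const orderRows = $('Save Dish Info').all().map(i => i.json);   // assumed node name

const dishes = orderRows.map(row => ({
  name: row['Dish name'],
  quantity: Number(row['Dish quantity']),
  unitPrice: Number(row['Per unit price']),
  lineTotal: Number(row['Actual price']),
}));

const orderTotal = dishes.reduce((sum, d) => sum + d.lineTotal, 0);

return [{
  json: {
    customerId: customer['Customer id'],
    customerName: customer['Customer name'],
    mobile: customer['Customer mobile number'],
    dishes,
    orderTotal,
    // Plain-text summary the AI prompt can embed directly
    orderSummary: dishes.map(d => `${d.quantity} x ${d.name} (${d.lineTotal})`).join(', '),
  },
}];
```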
by Jean-Marie Rizkallah
🧩 Jamf Smart Group Membership to Slack

Automatically export Jamf smart group membership to Slack in CSV format. Perfect for IT and security teams who need fast visibility into device grouping—without manually logging into Jamf. Slack automatically parses the CSV, making it viewable directly in the chat—no download required.

✅ Prerequisites
• A Jamf Pro API key with permissions to read smart groups and computer details
• A Slack app or incoming webhook URL with permission to post messages to your desired channel

🔍 How it works
• Manually trigger the flow or connect it to a webhook
• Fetch the list of smart group IDs (set manually in the workflow)
• Loop over each group to get its members
• Use a sub-workflow to fetch detailed info for each device
• Convert the member list to CSV
• Post the CSV file to a Slack channel

⚙️ Set up steps
• Takes ~5–10 minutes to configure
• Set your Jamf BaseURL and group IDs in the Set nodes
• Add your Jamf Pro API credentials to the HTTP Request nodes
• Provide your Slack webhook token or channel ID in the Slack node
• Optional: Customize CSV fields or formatting as needed
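If you want to customize the CSV step, a Code-node sketch along these lines can produce the file contents before the Slack upload. The member field names (id, name, serial_number) are assumptions; match them to what your Jamf device-detail requests actually return.

```javascript
// Sketch only: turn smart-group member items into CSV text for the Slack upload.
// Column names are assumed, not taken from the template.
const members = $input.all().map(item => item.json);

const columns = ['id', 'name', 'serial_number'];
const escape = v => `"${String(v ?? '').replace(/"/g, '""')}"`;

const csv = [
  columns.join(','),
  ...members.map(m => columns.map(c => escape(m[c])).join(',')),
].join('\n');

// A downstream Convert to File / Slack upload node can use this field as the file body.
return [{ json: { csv } }];
```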
by David Ashby
Complete MCP server exposing all AWS Transcribe Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every AWS Transcribe Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n AWS Transcribe Tool node with full error handling

📋 Available Operations (4 total)
Every possible AWS Transcribe Tool operation is included:

🔧 Transcriptionjob (4 operations)
• Create a transcription job
• Delete a transcription job
• Get a transcription job
• Get many transcription jobs

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native AWS Transcribe Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every AWS Transcribe Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
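For reference, a $fromAI() placeholder in a tool node is an n8n expression that lets the calling agent supply the value at run time. The parameter name and description below are hypothetical illustrations, not values copied from this template:

```
{{ $fromAI('TranscriptionJobName', 'Name of the transcription job to operate on', 'string') }}
```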
by Belgacem Dhiflaoui
What Problem Does This Solve?
This workflow automates the end-to-end process of capturing company information from Google Drive, storing it semantically in Pinecone, and interacting with users via an intelligent AI chatbot. It eliminates the need for manual customer service, lead tracking, and company information retrieval—offering a fully automated, intelligent engagement system.

Perfect for teams that need to:
- Maintain accurate, AI-readable company knowledge bases
- Answer customer inquiries 24/7 using AI
- Automatically collect and log lead information
- Embed a chatbot into their website to assist potential customers

Target Audience: Sales teams, business owners, marketing departments, customer support reps, startup founders, or anyone looking to automate AI-powered lead generation and customer engagement.

What Does It Do?

Part One – Knowledge Ingestion
- **Monitors** a Google Drive folder for new .txt or document uploads.
- **Downloads** the document and splits the content into manageable chunks using a recursive character splitter (a simplified sketch of this step appears at the end of this section).
- **Generates** embeddings via OpenAI.
- **Stores** the embeddings in a Pinecone vector database under the Q&A namespace.
- **Purpose:** This knowledge base is later used to answer business-related questions through AI.

Part Two – AI Chatbot Engagement
- **Listens** for incoming chat messages using n8n’s chatTrigger node.
- **Activates an AI agent** (powered by GPT-4o) to respond to inquiries regarding business hours, services, products, or general company info.
- **Retrieves knowledge** using a vector search tool connected to Pinecone (newCompany_q).
- **Captures leads:** If a user shows interest, the AI collects the following and stores it in a connected Google Sheet automatically:
  - Name
  - Email
  - Phone number
  - Specific interest

Key Features
🔄 Google Drive integration for real-time file processing
🧠 OpenAI embedding + Pinecone vector store for semantic memory
🤖 LangChain agent with tool-based reasoning
🗃️ Google Sheets integration for dynamic lead storage
💬 GPT-4o model for accurate, human-like conversation
⚙️ Modular design to expand into CRM, Notion, or email workflows
🌐 Website-ready chatbot endpoint

🧰 Setup Instructions
Prerequisites:
- n8n instance (cloud or self-hosted)
- Google Drive account (for uploading company data)
- Pinecone account (for vector storage)
- OpenAI API key
- Google Sheets access with OAuth2 credentials

📦 Installation Steps
1. Import the Workflow: Upload the JSON files into your n8n instance.
2. Configure Credentials: In n8n > Credentials, connect Google Drive, OpenAI, Pinecone, and Google Sheets.
3. Set Pinecone Index & Namespace. Example: Index: companyName, Namespace: Q&A
4. Test the Flow:
   - Upload a sample .txt or PDF file to the monitored Drive folder.
   - Send a message to the chatbot (e.g., "What are your opening hours?").
   - Check the Google Sheet for collected user info.

How It Works (Behind the Scenes)

Part 1 – Data Preparation:
- Company files are uploaded to Google Drive.
- The file is detected, downloaded, and chunked.
- Embeddings are created using OpenAI.
- Data is stored in Pinecone for semantic retrieval.

Part 2 – Chat Interaction:
- A chat message triggers the workflow via webhook.
- The AI agent interprets the intent and accesses company data via newCompany_q.
- If lead data is gathered, it is appended to a Google Sheet using the AI-parsed values.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
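As a rough illustration of the chunking step above: the template uses a LangChain Recursive Character Text Splitter node, but a simplified sliding-window version of the same idea is sketched below. The chunk size, overlap, and input field name are assumptions, not the template's settings.

```javascript
// Simplified approximation of the text-splitting step that runs before embedding.
// The real workflow uses the Recursive Character Text Splitter node; this
// sliding window only shows the idea of overlapping chunks.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + chunkSize, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - overlap; // overlap preserves context across chunk boundaries
  }
  return chunks;
}

const text = $json.data ?? ''; // assumed field holding the extracted document text
return chunkText(text).map(chunk => ({ json: { chunk } }));
```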
by InfyOm Technologies
✅ What problem does this workflow solve?
Many websites lack a smart, searchable interface, and visitors often leave due to unanswered questions. This workflow transforms any website into a Retrieval-Augmented Generation (RAG) chatbot—automatically extracting content, creating embeddings, and enabling real-time, context-aware chat on your own site.

⚙️ What does this workflow do?
- Accepts a website URL through a form trigger.
- Fetches and cleans website content.
- Parses content into smaller sections.
- Generates vector embeddings using OpenAI (or your embedding model).
- Stores embeddings and metadata in Supabase’s vector database.
- When a user asks a question:
  - Searches Supabase for relevant chunks via similarity search.
  - Retrieves matching content as context.
  - Sends context + question to OpenAI to generate an accurate answer.
  - Returns the AI-generated response to the user in the chat interface.

🔧 Setup Instructions

🖥️ Website Form Trigger
- Use a Form / HTTP Trigger to submit website URLs for indexing.

📥 Content Extraction & Chunking
- Use HTTP nodes to fetch HTML.
- Clean and parse it (e.g., remove scripts, ads).
- Use a Function node to split it into manageable text chunks (see the Function-node sketch at the end of this section).

🧠 Embedding Generation
- Call OpenAI (or Cohere) to generate embeddings for each chunk.
- Insert vectors and metadata into Supabase via its API or the n8n Supabase node.

💬 User Query Handling
- Use a Chat Trigger (webhook/UI) to receive user questions.
- Convert the question into an embedding.
- Query Supabase with similarity search (e.g., a match_documents RPC).
- Retrieve top-matching chunks and feed them into OpenAI with the user question.
- Return the reply to the user.

🛠 AI & Database Setup
- OpenAI API key for embedding and chat.
- A Supabase project with:
  - the vector extension enabled
  - tables for document chunks and embeddings
  - a similarity search function such as match_documents

💬 How to Embed the Chat Widget on Your Website
You can add the chatbot interface to your website with a simple JavaScript snippet.
Steps:
1. Open the "When chat message received" node.
2. Copy the Chat URL.
3. Make sure the "Make Chat Publicly Available" toggle is enabled.
4. Make sure the mode is "Embedded Chat".
5. Follow the instructions given on this package here.

🧠 How it Works
1. Submit URL → Form Trigger
2. Fetch Website Content → HTTP Request
3. Clean & Chunk Content → Function Node
4. Make Embeddings (OpenAI/Cohere)
5. Store in Supabase → embeddings + metadata
6. User Chat → Chat Trigger
7. Search for Similar Content → Supabase similarity match
8. Generate Answer → OpenAI completion w/ context
9. Send Reply → Chat interface returns answer

🗂 Why Supabase?
Supabase offers a scalable Postgres-based vector database with extensions like pgvector, making it easy to:
- Store vector data alongside metadata
- Run ANN (Approximate Nearest Neighbor) similarity searches
- Integrate seamlessly with n8n and your chatbot UI

👤 Who can use this?
- 📝 Documentation websites
- 👩‍💼 Support portals
- 🏢 Product/Landing pages
- 🛠 Internal knowledge bases
Perfect for anyone who wants a smart, website-specific chatbot without building an entire AI stack from scratch.

🚀 Ready to Deploy?
Plug in your:
- ✅ OpenAI API Key
- ✅ Supabase project credentials
- ✅ Chat UI or webhook endpoint
… and launch your AI-powered, website-specific RAG chatbot in minutes!
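A minimal sketch of the "Clean & Chunk Content" Function node mentioned above, assuming the fetched HTML arrives in $json.data and the page URL in $json.url (both field names are assumptions):

```javascript
// Strip markup from the fetched page, then split the remaining text into chunks.
// Field names and the chunk size are assumptions; adapt to your HTTP Request output.
const html = $json.data ?? '';

const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, ' ')  // drop scripts
  .replace(/<style[\s\S]*?<\/style>/gi, ' ')    // drop styles
  .replace(/<[^>]+>/g, ' ')                     // drop remaining tags
  .replace(/\s+/g, ' ')
  .trim();

const chunkSize = 1200;
const chunks = [];
for (let i = 0; i < text.length; i += chunkSize) {
  chunks.push(text.slice(i, i + chunkSize));
}

return chunks.map((chunk, index) => ({
  json: { chunk, metadata: { source: $json.url, index } },
}));
```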
by Saswat Saubhagya Rout
📝 Use Case
This n8n workflow automates the creation and publication of technical blog posts based on a list of topics stored in Google Sheets. It fetches context using Tavily and Wikipedia, generates Markdown-formatted content with Gemini AI, commits it to a GitHub repository, and updates a Jekyll-powered blog — all without manual intervention. Ideal for developers, bloggers, or content teams who want to streamline technical content creation and publishing.

⚙️ Setup Instructions

🔑 Prerequisites
- n8n (cloud or self-hosted)
- Tavily API key
- Google Sheets with blog topics
- Gemini (Google PaLM) API key
- GitHub repository (Jekyll enabled)
- GitHub OAuth2 credentials
- Google OAuth2 credentials

🧩 Setup Steps
1. Import the workflow JSON into your n8n instance.
2. Set up the following credentials in n8n: Tavily API, Google Sheets OAuth2, Google PaLM/Gemini AI, GitHub OAuth2.
3. Prepare your Google Sheet:
   - Columns: Title, status, row_number
   - Set status to blank for topics to be picked up.
4. Configure:
   - GitHub repo and _posts/ path
   - Jekyll setup (front matter, _config.yml, GitHub Pages)
5. Adjust prompt/custom parameters if needed.
6. Enable and deploy the workflow. Schedule it daily or trigger it manually.

🔄 Workflow Details

| Node | Function |
|------|----------|
| Schedule Trigger | Triggers the flow at a set interval |
| Google Sheets (Get Topic) | Fetches the next incomplete blog topic |
| Extract Topic | Parses the topic text from the sheet |
| Tavily Search | Gathers up-to-date content related to the topic |
| Wikipedia Tool | Optionally adds more context or images |
| Summarize Results | Formats the context for the AI |
| Gemini AI Agent (LangChain) | Generates a Markdown blog post with YAML front matter |
| Set File Parameters | Prepares the filename, content, and commit message |
| GitHub Commit | Uploads the .md file to the _posts/ directory |
| Update Google Sheet | Marks the topic as done after a successful commit |

🛠️ Customization Options
- Change the LLM prompt (e.g., tone, depth, format).
- Use OpenAI instead of Gemini by switching nodes.
- Modify the filename pattern or GitHub repo path.
- Add Slack/Discord notifications after publishing.
- Extend the flow to upload images or embed YouTube links.

⚠️ Community Nodes Used
This workflow uses the following community nodes:
- @tavily/n8n-nodes-tavily.tavily – for deep search
> ⚠️ Ensure these are installed and enabled in your n8n instance.

💡 Pro Tips
- Use GitHub Actions to trigger an automatic Jekyll build post-commit.
- Structure blog posts with front matter, headings, and a table of contents for SEO.
- Set the Schedule Trigger to daily at a fixed time to keep content flowing.
- Enhance formatting in the AI output using code blocks, images, and lists.

✅ Example Output
title: "How LLMs Are Changing Web Development"
date: "2025-07-25"
categories: [webdev, AI]
tags: [LLM, Gemini, n8n, automation]
excerpt: "Learn how LLMs like Gemini are transforming how we generate and deploy developer content."
author: "Saswat Saubhagya"

Table of Contents
- Introduction
- Understanding LLMs
- Use Cases in Web Development
- Challenges
- Conclusion
...
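To illustrate the "Set File Parameters" step in the table above, here is a hypothetical Code-node version that derives the Jekyll file path, commit message, and content from the generated post. The input field names (title, content) are assumptions about what the upstream AI node outputs.

```javascript
// Hypothetical sketch of the "Set File Parameters" step.
// Jekyll expects posts named _posts/YYYY-MM-DD-slug.md.
const title = $json.title ?? 'untitled-post';
const content = $json.content ?? ''; // Markdown with YAML front matter from the AI step

const date = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
const slug = title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')
  .replace(/^-+|-+$/g, '');

return [{
  json: {
    filePath: `_posts/${date}-${slug}.md`,
    commitMessage: `Add post: ${title}`,
    fileContent: content,
  },
}];
```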
by Anderson Adelino
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Build an intelligent AI chatbot with RAG and Cohere Reranker

Who is it for?
This template is perfect for developers, businesses, and automation enthusiasts who want to create intelligent chatbots that can answer questions based on their own documents. Whether you're building customer support systems, internal knowledge bases, or educational assistants, this workflow provides a solid foundation for document-based AI conversations.

How it works
This workflow creates an intelligent AI assistant that combines RAG (Retrieval-Augmented Generation) with Cohere's reranking technology for more accurate responses:
1. Chat Interface: Users interact with the AI through a chat interface
2. Document Processing: PDFs from Google Drive are automatically extracted and converted into searchable vectors
3. Smart Search: When users ask questions, the system searches through vectorized documents using semantic search
4. Reranking: Cohere's reranker ensures the most relevant information is prioritized
5. AI Response: OpenAI generates contextual answers based on the retrieved information
6. Memory: Conversation history is maintained for context-aware interactions

Setup steps

Prerequisites
- n8n instance (self-hosted or cloud)
- OpenAI API key
- Supabase account with the vector extension enabled
- Google Drive access
- Cohere API key

1. Configure Supabase Vector Store
First, create a table in Supabase with vector support:

```sql
CREATE TABLE cafeina (
  id SERIAL PRIMARY KEY,
  content TEXT,
  metadata JSONB,
  embedding VECTOR(1536)
);

-- Create a function for similarity search
CREATE OR REPLACE FUNCTION match_cafeina(
  query_embedding VECTOR(1536),
  match_count INT DEFAULT 10
)
RETURNS TABLE(
  id INT,
  content TEXT,
  metadata JSONB,
  similarity FLOAT
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    cafeina.id,
    cafeina.content,
    cafeina.metadata,
    1 - (cafeina.embedding <=> query_embedding) AS similarity
  FROM cafeina
  ORDER BY cafeina.embedding <=> query_embedding
  LIMIT match_count;
END;
$$;
```

2. Set up credentials
Add the following credentials in n8n:
- OpenAI: Add your OpenAI API key
- Supabase: Add your Supabase URL and service role key
- Google Drive: Connect your Google account
- Cohere: Add your Cohere API key

3. Configure the workflow
- In the "Download file" node, replace URL DO ARQUIVO (the "file URL" placeholder) with your Google Drive file URL
- Adjust the table name in both Supabase Vector Store nodes if needed
- Customize the agent's tool description in the "searchCafeina" node

4. Load your documents
- Execute the bottom workflow (starting with "When clicking 'Execute workflow'")
- This will download your PDF, extract the text, and store it in Supabase
- You can repeat this process for multiple documents

5. Start chatting
Once documents are loaded, activate the main workflow and start chatting with your AI assistant through the chat interface.
How to customize
- **Different document types**: Replace the Google Drive node with other sources (Dropbox, S3, local files)
- **Multiple knowledge bases**: Create separate vector stores for different topics
- **Custom prompts**: Modify the agent's system message for specific use cases
- **Language models**: Switch between different OpenAI models or use other LLM providers
- **Reranking settings**: Adjust the top-k parameter for more or fewer search results
- **Memory window**: Configure the conversation memory buffer size

Tips for best results
- Use high-quality, well-structured documents for better search accuracy
- Keep document chunks reasonably sized for optimal retrieval
- Regularly update your vector store with new information
- Monitor token usage to optimize costs
- Test different reranking thresholds for your use case

Common use cases
- **Customer Support**: Create bots that answer questions from product documentation
- **HR Assistant**: Build assistants that help employees find information in company policies
- **Educational Tutor**: Develop tutors that answer questions from course materials
- **Research Assistant**: Create tools that help researchers find relevant information in papers
- **Legal Helper**: Build assistants that search through legal documents and contracts
by inderjeet Bhambra
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works
This workflow is an intelligent SEO analysis pipeline that ethically scrapes blog content and performs comprehensive SEO evaluation using AI. It receives blog URLs via webhook, validates permissions through robots.txt compliance, extracts content, and generates detailed SEO insights across four strategic dimensions: Content Optimization, Keyword Strategy, Technical SEO, and Backlink Building potential. The system prioritizes ethical web scraping by checking robots.txt permissions before proceeding, ensuring compliance with website policies. Upon successful analysis, it returns a structured JSON report with actionable SEO recommendations, performance scores, and optimization strategies.

Technical Specifications
- Trigger: HTTP POST webhook
- Processing Time: 30–60 seconds depending on content size
- AI Model: GPT-4.1 minimum, with a specialized SEO analysis prompt
- Output Format: Structured JSON
- Error Handling: Graceful failure with informative messages
- Compliance: Respects website robots.txt policies
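As an example of how a client might call this pipeline, the snippet below posts a blog URL and reads back the JSON report. It assumes the Webhook node accepts a JSON body with a url field; the actual webhook path and field name depend on how this template's Webhook node is configured.

```javascript
// Illustrative client call (assumed webhook path and payload field).
const response = await fetch('https://your-n8n-host/webhook/seo-analysis', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ url: 'https://example.com/blog/my-post' }),
});

const report = await response.json();
console.log(report); // structured SEO report: scores and recommendations per dimension
```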
by Akash Kankariya
🚀 Discover trending and viral YouTube videos easily with this powerful n8n automation! This workflow helps you perform bulk research on YouTube videos related to any search term, analyzing engagement data like views, likes, comments, and channel statistics — all in one streamlined process.

✨ Perfect for:
- Content creators wanting to find viral video ideas
- Marketers analyzing competitor content
- YouTubers optimizing their content strategy

How It Works 🎯
1️⃣ Input Your Search Term — Simply enter any keyword or topic you want to research.
2️⃣ Select Video Format — Choose between short, medium, or long videos.
3️⃣ Choose Number of Videos — Define how many videos to analyze in bulk.
4️⃣ Automatic Data Fetch — The workflow grabs video IDs, then fetches detailed video data and channel statistics from the YouTube API.
5️⃣ Performance Scoring — Videos are scored based on engagement rates, with easy-to-understand labels like 🚀 HOLY HELL (viral) or 💀 Dead (the sketch after this section shows one way such a score can be computed).
6️⃣ Export to Google Sheets — All data, including thumbnails and video URLs, is appended to your Google Sheet for comprehensive review and easy sharing.

Setup Instructions 🛠️
1. Google API Key
   - Get your YouTube Data API key from the Google Developers Console.
   - Add it securely in the n8n credentials manager (do not hardcode it).
2. Google Sheets Setup
   - Create a Google Sheet to store your results (a template link is provided).
   - Share the sheet with the Google account used in n8n.
   - Update the workflow with your sheet's Document ID and Sheet Name if needed.
3. Run the Workflow
   - Trigger the form webhook via browser or POST call.
   - Enter the search term, format, and number of videos.
   - Let it process and check your Google Sheet for insights!

Features ✨
- Bulk fetches the latest and top-viewed YouTube videos.
- Intelligent video performance scoring with emojis for quick insights 🔥🎬.
- Organizes data into Google Sheets with thumbnail previews 🖼️.
- Easy to customize search parameters via an intuitive form.
- Fully automated, no manual API calls needed.

Get Started Today! 🌟
Boost your YouTube content strategy and stay ahead with this powerful viral video research automation! Try it now on your n8n instance and tap into the world of viral content like a pro 🎥💡
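The exact scoring formula lives inside the workflow and isn't documented here; the Code-node sketch below shows one plausible way to compute an engagement score and map it to the emoji labels from step 5. Treat the weights, thresholds, and input field names as assumptions.

```javascript
// Illustrative engagement scoring: weigh likes/comments against views, and
// views against channel size, then map the score to a label. Values are assumed.
const { viewCount = 0, likeCount = 0, commentCount = 0, subscriberCount = 1 } = $json;

const engagementRate = (Number(likeCount) + Number(commentCount)) / Math.max(Number(viewCount), 1);
const reachMultiple = Number(viewCount) / Math.max(Number(subscriberCount), 1);
const score = engagementRate * 100 + reachMultiple;

let label = '💀 Dead';
if (score > 50) label = '🚀 HOLY HELL';
else if (score > 10) label = '🔥 Hot';
else if (score > 2) label = '🙂 Decent';

return [{ json: { ...$json, score: Number(score.toFixed(2)), label } }];
```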
by Jean-Marie Rizkallah
🧩 Jamf Policies Export to Slack
Quickly export and review your entire Jamf policy configuration—including triggers, frequencies, and scope—directly in Slack. This enables IT and security teams to audit policy setups without logging into Jamf or generating reports manually.

❗The Problem
Jamf Pro lacks a straightforward way to quickly review or share a list of all configured policies, including key attributes like frequency, scope, or triggers. Security teams often need this for audit or compliance reviews, but navigating Jamf’s UI or exporting via the API is time-consuming.

🔧 This Fixes It
This workflow fetches all policies, extracts the most relevant fields, compiles them into a CSV file, and posts that readable file into a designated Slack channel—automatically or on demand.

✅ Prerequisites
• A Jamf Pro API key (OAuth2) with read access to policies
• A Slack app with permission to post files into your chosen channel

🔍 How it works
• Manually trigger or use the webhook to initiate the flow
• Retrieve all policies from Jamf via the XML API
• Convert the XML response into JSON
• Split and loop through each policy ID
• Retrieve detailed data for each policy
• Format relevant fields (ID, name, trigger, scope, etc.)
• Convert the final data set into a .csv file
• Upload the file to your Slack channel

⚙️ Set up steps
• Takes ~10 minutes to configure
• Set the Jamf BaseURL in the “Jamf Server” node
• Configure Jamf OAuth2 credentials in the HTTP Request nodes
• Adjust the fields for export in the “Set-fields” node
• Set your Slack credentials and target channel in the “Post to Slack” node
• Optional: Customize the exported fields or filename

🔄 Automation Ready
Schedule this flow daily/weekly, or tie it to change events to keep your team informed.
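To show what the "format relevant fields" step can look like, here is a hypothetical Code-node sketch that plucks the exported attributes from each policy's detail response. The property paths assume the XML-to-JSON conversion yields an object shaped like { policy: { general, scope } }; verify them against the output of your conversion node before reusing this.

```javascript
// Hypothetical "Set-fields"-style step: pick the policy attributes worth exporting.
// All property paths below are assumptions about the converted Jamf response.
const policy = $json.policy ?? {};
const general = policy.general ?? {};
const scope = policy.scope ?? {};

return [{
  json: {
    id: general.id,
    name: general.name,
    enabled: general.enabled,
    trigger: general.trigger_other || general.trigger,
    frequency: general.frequency,
    allComputers: scope.all_computers,
    scopedGroups: (scope.computer_groups ?? []).map(g => g.name).join('; '),
  },
}];
```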
by Paul
🚀 Google Search Console MCP Server

📋 Description
This n8n workflow serves as a Model Context Protocol (MCP) server, connecting MCP-compatible AI tools (like Claude) directly to the Google Search Console APIs. With this workflow, users can automate critical SEO tasks and manage Google Search Console data effortlessly via MCP endpoints.

Included Functionalities:
📌 List Verified Sites
📌 Retrieve Detailed Site Information
📌 Access Search Analytics Data
📌 Submit and Manage Sitemaps
📌 Request URL Indexing

OAuth2 is fully supported for secure and seamless API interactions.

🛠️ Setup Instructions

🔑 Prerequisites
- n8n instance (cloud or self-hosted)
- Google Cloud project with these APIs enabled:
  - Google Search Console API
  - Web Search Indexing API
- OAuth2 credentials from Google Cloud

⚙️ Workflow Setup

Step 1: Import Workflow
Open n8n, select "Import from JSON", and paste this workflow JSON.

Step 2: Configure OAuth2 Credentials
- Navigate to Settings → Credentials.
- Add new credentials (Google OAuth2 API):
  - Client ID and Client Secret from Google Cloud
  - Scopes:
    - https://www.googleapis.com/auth/webmasters.readonly
    - https://www.googleapis.com/auth/webmasters
    - https://www.googleapis.com/auth/indexing

Step 3: Configure Webhooks
- Webhook URLs auto-generate in the MCP Server Trigger node.
- Ensure webhooks are publicly accessible via HTTPS.

Step 4: Testing
Test your endpoints with sample HTTP requests to confirm everything is working correctly.

🎯 Usage Examples
- **List Sites**: Fetch all verified Search Console sites.
- **Get Site Info**: Get detailed information about a particular site.
- **Search Analytics**: Pull metrics such as clicks, impressions, and rankings.
- **Submit Sitemap**: Automatically submit sitemaps.
- **Request URL Indexing**: Trigger Google's indexing for specific URLs instantly.

🚩 Use Cases & Applications
- SEO automation workflows
- AI-driven SEO analytics
- Real-time website performance monitoring
- Automated sitemap management
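For context, the Search Analytics functionality corresponds to the Search Console API's searchAnalytics.query method; a standalone request of that kind looks roughly like the sketch below. The site URL, date range, and token variable are placeholder values, and in the workflow itself the OAuth2 credential handles authorization for you.

```javascript
// Sketch of the kind of Search Analytics request the MCP tool wraps.
const siteUrl = encodeURIComponent('https://example.com/');

const res = await fetch(
  `https://www.googleapis.com/webmasters/v3/sites/${siteUrl}/searchAnalytics/query`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.GSC_ACCESS_TOKEN}`, // hypothetical env var
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      startDate: '2025-01-01',
      endDate: '2025-01-31',
      dimensions: ['query', 'page'],
      rowLimit: 25,
    }),
  },
);

const { rows = [] } = await res.json();
console.log(rows); // clicks, impressions, ctr, position per query/page
```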
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
The Recipe Recommendation Engine with Bright Data MCP & OpenAI is a powerful automated workflow that combines Bright Data's MCP for scraping trending or regional recipe data with OpenAI GPT-4o mini to generate personalized recipe recommendations.

This automated workflow is designed for:
- Food Bloggers & Culinary Creators: who want to automate the extraction and curation of recipes from across the web to generate content, compile cookbooks, or publish newsletters.
- Nutritionists & Health Coaches: who need structured recipe data to analyze ingredients, calories, and nutrition for personalized meal planning or dietary tracking.
- AI/ML Engineers & Data Scientists: building models that classify cuisines, predict recipes from ingredients, or generate dynamic meal suggestions using clean, structured datasets.
- Grocery & Meal Kit Platforms: who aim to extract recipes to power recommendation engines, ingredient lists, or personalized meal plans.
- Recipe Aggregator Startups: looking to scale recipe data collection, filtering, and standardization across diverse cooking websites with minimal human intervention.
- Developers Integrating Cooking Features: into apps or digital assistants that offer recipe recommendations, step-by-step cooking instructions, or nutritional insights.

What problem is this workflow solving?
This workflow solves:
- Automated recipe data extraction from any public URL
- AI-driven structured data extraction
- Scalable looped crawling and processing
- Real-time notifications and data persistence

What this workflow does
1. Set Recipe Extract URL
   - Configure the recipe website URL in the input node
   - Set your Bright Data zone name and authentication
2. Paginated Data Extract
   - Triggers a paginated extraction across multiple pages (recipe listing, index, or search pages)
   - Returns a list of recipe links for processing
3. Loop Over Items
   - Loops through the array of recipe links
   - Each link is passed individually to the scraping engine
4. Bright Data MCP Client (Per Recipe)
   - Scrapes each individual recipe page using scrape_as_html
   - Smartly bypasses common anti-bot protections via Bright Data Web Unlocker
5. Structured Recipe Data Extract (via OpenAI GPT-4o mini)
   - Converts raw HTML to clean text using an LLM preprocessing node
   - Uses OpenAI GPT-4o mini to extract structured data
6. Webhook Notification
   - Pushes the structured recipe data to your configured webhook endpoint
   - Format: JSON payload, ideal for Slack, internal APIs, or dashboards
7. Save Response to Disk
   - Saves the structured recipe JSON information to the local file system

Pre-conditions
- You need to have a Bright Data account and do the necessary setup as mentioned in the "Setup" section below.
- You need to have an OpenAI account.

Setup
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
- In n8n, configure the OpenAI account credentials.
- Make sure to set the fields as part of Set the Recipe Extract URL.
- Remember to set the webhook_url to send a webhook notification with the recipe response.
- Set the desired local path in the Write the structured content to disk node to save the recipe response.
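As a rough illustration of steps 5–6, the structured extraction is typically prompted to return an object along these lines, which the webhook notification then forwards. The field names below are assumptions, not this template's exact schema; align them with your own extraction prompt.

```javascript
// Illustrative shape only: the kind of structured recipe object the GPT-4o mini
// extraction step can be prompted to return for each scraped page.
return [{
  json: {
    title: $json.title,
    sourceUrl: $json.url,
    cuisine: $json.cuisine,
    servings: $json.servings,
    ingredients: $json.ingredients ?? [],  // e.g. [{ name, quantity, unit }]
    steps: $json.steps ?? [],              // ordered instruction strings
    nutrition: $json.nutrition ?? {},      // calories, macros, etc., if present
  },
}];
```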
How to customize this workflow to your needs
You can tailor the Recipe Recommendation Engine workflow to better fit your specific use case by modifying the following key components:

1. Input Fields Node
   - Update the Recipe URL to target specific cuisine sites or recipe types (e.g., vegan, keto, regional dishes).
2. LLM Configuration
   - Swap out the OpenAI GPT-4o mini model for another provider (like Google Gemini) if you prefer.
   - Modify the structured data prompt to extract the custom fields you need.
3. Webhook Notification
   - Configure the Webhook Notification node to point to your preferred integration (e.g., Slack, Discord, internal APIs).
4. Storage Destination
   - Change the Save to Disk node to store the structured recipe data in:
     - A cloud bucket (S3, GCS, Azure Blob, etc.)
     - A database (MongoDB, PostgreSQL, Firestore)
     - Google Sheets or Airtable for spreadsheet-style access.