by Samir Saci
**Tags**: Supply Chain, Logistics, Route Planning, Transportation, GPS API

### Context
Hi! I'm Samir, a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies improve their logistics operations using data, AI, and automation to reduce costs and minimize environmental footprint.

> Let's use n8n to build smarter and greener transport operations!

For business inquiries, you can find me on LinkedIn.

### Who is this template for?
This workflow is designed for logistics and transport teams who want to automate distance and travel time calculations for truck shipments. Ideal for:
- Control tower dashboards
- Transport cost simulations
- Route optimization studies

### How does it work?
This n8n workflow connects to a Google Sheet where you store city-to-city shipment lanes, and uses the OpenRouteService API to calculate:
- Distance (in meters)
- Travel time (in seconds)
- Number of route steps

Steps:
1. Load departure/destination city coordinates from a Google Sheet
2. Loop through each record
3. Query OpenRouteService using the truck (driving-hgv) profile
4. Extract and store results: distance, duration, number of steps
5. Update the Google Sheet with the new values

### What do I need to get started?
This workflow is beginner-friendly and requires:
- A Google Sheet with route pairs (departure and destination coordinates)
- A free OpenRouteService API key (get one here)

### Next Steps
Follow the sticky notes inside the workflow to:
- Select your sheet
- Plug in your API key
- Launch the flow!

Check the tutorial for a full walkthrough. You can customize the workflow to:
- Add CO2 emission estimates for sustainability reporting
- Connect to your TMS via API or EDI

This template was built using n8n v1.93.0.

Submitted: June 1, 2025
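As a hedged sketch of the extraction step, the shape below mirrors an OpenRouteService `/v2/directions` response (`routes` → `summary` with distance in meters and duration in seconds, plus per-segment `steps`); the sample values themselves are invented for illustration.

```python
# Sample of the response shape returned by the OpenRouteService
# directions endpoint; the numbers are made up for this example.
sample_response = {
    "routes": [{
        "summary": {"distance": 458230.4, "duration": 19874.6},
        "segments": [{"steps": [{}, {}, {}]}],   # three route steps
    }]
}

def extract_route_metrics(response: dict) -> dict:
    """Pull distance (m), duration (s), and step count from a route."""
    route = response["routes"][0]
    steps = sum(len(seg["steps"]) for seg in route["segments"])
    return {
        "distance_m": route["summary"]["distance"],
        "duration_s": route["summary"]["duration"],
        "steps": steps,
    }

metrics = extract_route_metrics(sample_response)
print(metrics)  # {'distance_m': 458230.4, 'duration_s': 19874.6, 'steps': 3}
```

These three values are what the workflow writes back into the Google Sheet for each lane.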
by Davide
This workflow automates the creation of audiobooks from structured text data using AI-powered text-to-speech and audio processing services. Click here to listen to the result of my example.

### Key Advantages
1. **Fully Automated Audiobook Production**: The entire pipeline, from text retrieval to final audio upload, is automated. This removes manual steps, reduces human error, and enables repeatable audiobook generation at scale.
2. **Advanced Voice Customization**: By using voice design prompts (voice description + style instruction), the workflow produces highly expressive and context-aware narration, ideal for audiobooks, storytelling, and branded audio content.
3. **Scalable and API-Safe Architecture**: The batch processing and looping logic respects external API limits. This makes the workflow robust even for large audiobooks with dozens or hundreds of segments.
4. **Centralized Content Management**: Google Sheets acts as a lightweight CMS:
   - Easy to edit scripts and voice parameters
   - Clear tracking of processed items
   - Temporary URLs and merge flags ensure full visibility into the workflow state
5. **Asynchronous and Fault-Tolerant**: The use of wait nodes and status checks allows the workflow to handle long-running audio operations without blocking or failing prematurely.
6. **Seamless Cloud Storage Integration**: Final audiobooks are automatically stored in Google Drive, making them immediately accessible for distribution, review, or further processing.
7. **Modular and Extensible Design**: Each step (TTS generation, batching, merging, storage) is modular. This makes it easy to:
   - Swap TTS providers
   - Change storage destinations
   - Add post-processing steps (e.g., metadata, chapter markers)

### How it Works
This workflow automates the creation of audiobooks using AI-generated voice synthesis with custom voice design. The process begins by retrieving script data from a Google Sheets document containing text, speaker information, voice descriptions, and style instructions.

The workflow then processes each row in batches, sending the text to the Qwen3-TTS model on Replicate with the specified voice parameters to generate individual audio segments. Each generated audio URL is stored back in the spreadsheet. Once multiple audio segments are ready, they are merged into a single audio file using an external FFmpeg API service. The system polls for merge completion, retrieves the final merged audio file, and uploads it to Google Drive as a complete audiobook with a timestamped filename.

### Set up Steps
1. **Data Source Configuration**: Set up the Google Sheets node to connect to your spreadsheet containing the audiobook script with the required columns: Text, Speaker, Voice Description, Style Instruction, Temp URL, and To Merge.
2. **API Credentials Setup**:
   - Configure Replicate API credentials for Qwen3-TTS voice synthesis
   - Set up Fal.run API credentials for FFmpeg audio merging operations
   - Configure Google Drive OAuth2 credentials for uploading the final audiobook
3. **Voice Design Parameters**: Ensure your spreadsheet contains voice descriptions and style instructions compatible with the Qwen3-TTS model's requirements.
4. **Destination Settings**: Verify that the Google Drive folder ID in the upload node points to your desired storage location for the final audiobook.
5. **Execution**: Trigger the workflow manually to begin processing your script rows and generating the complete audiobook with custom voice design.

Subscribe to my new YouTube channel, where I share videos and Shorts with practical tutorials and free templates for n8n.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
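The "API-safe" batching described above can be sketched in plain Python. This is an illustrative stand-in for n8n's Split In Batches node, and the batch size is an invented example, not a value from the template:

```python
def split_into_batches(rows, batch_size=5):
    """Yield successive batches of at most `batch_size` rows."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

# Twelve script rows split into batches of five: 5, 5, and 2 rows.
rows = [{"Text": f"Segment {n}", "To Merge": True} for n in range(12)]
batches = list(split_into_batches(rows, batch_size=5))
print([len(b) for b in batches])  # [5, 5, 2]
```

In the real workflow, each batch is sent to the TTS API and a wait step runs between batches so rate limits are never exceeded.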
by Oneclick AI Squad
This automated n8n workflow converts any technical documentation or blog post URL into a professional, step-by-step developer tutorial video complete with AI-generated narration, code syntax highlighting, terminal command animations, and visual diagrams. The system intelligently analyzes documentation structure, extracts code examples, generates natural voiceover narration, creates synchronized visual scenes, and automatically publishes the finished video to YouTube with SEO-optimized descriptions.

### Fundamental Aspects
- **Webhook-Based Trigger**: Accepts HTTP POST requests containing a documentation URL to initiate the automated video creation pipeline on demand.
- **Intelligent Content Extraction**: Fetches HTML content, parses documentation structure, extracts code blocks with language detection, identifies headings for organization, and cleans irrelevant elements like navigation and scripts.
- **AI-Powered Tutorial Planning**: Uses Claude AI to analyze documentation content and generate a comprehensive tutorial outline including section titles, duration estimates, narration scripts, visual types (code/terminal/diagram), and learning outcomes.
- **Professional Audio Generation**: Converts narration scripts into high-quality audio using Google Cloud Text-to-Speech with natural-sounding neural voices, proper pacing, and timing synchronization.
- **Dynamic Visual Scene Creation**: Generates code editor scenes with syntax highlighting and typewriter effects, terminal animations with command execution sequences, flowchart diagrams with progressive reveals, and text overlays with key points.
- **Automated Video Rendering**: Combines audio narration with visual scenes using the Remotion API to render publication-ready videos in 1080p resolution at 30 fps with smooth transitions.
- **Multi-Platform Distribution**: Automatically uploads completed videos to YouTube with AI-generated titles and descriptions, backs up to Google Drive for archival, and returns comprehensive metadata via the webhook response.
Setup Instructions Import the Workflow into n8n**: Download the workflow JSON file and import via n8n interface under "Workflows" β "Import from File" option. Configure Claude AI (Anthropic) Credentials**: Navigate to the "Analyze with Claude AI" node and click the credentials dropdown. Create new Anthropic credentials using your API key from console.anthropic.com. Ensure you have access to Claude Sonnet 4 model (claude-sonnet-4-20250514). Save and test the connection to verify API access. Set Up Google Cloud Text-to-Speech**: Go to Google Cloud Console and enable the Text-to-Speech API. Create a service account with "Cloud Text-to-Speech User" role. Generate and download a JSON key file for the service account. In n8n, navigate to "Generate Audio with Google TTS" node and add service account credentials. Upload the JSON key file when prompted. Configure Remotion API for Video Rendering**: Sign up for a Remotion account at remotion.dev and obtain API credentials. In the "Render Video with Remotion" node, add HTTP Header Auth credentials. Set authorization header with your Remotion API key. Ensure you have a Remotion composition named "TutorialVideo" deployed. Note: You may need to create a custom Remotion project for code highlighting and terminal animations. Add YouTube OAuth2 Credentials**: Navigate to "Upload to YouTube" node and create YouTube OAuth2 credentials. Follow Google's OAuth flow to authorize n8n to upload videos on your behalf. Ensure your YouTube account has upload permissions and is verified for videos longer than 15 minutes. Configure default privacy settings (public, unlisted, or private) in node parameters. Configure Google Drive Backup**: Go to "Backup to Google Drive" node and add Google Drive OAuth2 credentials. Authorize n8n to access your Google Drive. Optionally specify a folder ID in node options to organize video backups. Activate Webhook Endpoint**: Activate the workflow using the toggle switch in the top-right corner. 
Copy the webhook URL from the "Webhook Trigger" node (appears after activation). The URL will be in format: https://your-n8n-instance.com/webhook/create-video. Test the Workflow**: Send a test POST request to the webhook URL using curl, Postman, or HTTPie: curl -X POST https://your-n8n-instance.com/webhook/create-video \ -H "Content-Type: application/json" \ -d '{"documentationUrl": "https://docs.example.com/getting-started"}' Monitor the execution in n8n's "Executions" tab to track progress through each node. Check YouTube and Google Drive for the generated video (processing may take 5-15 minutes depending on content length). Verify Output Quality**: Review the generated video for audio quality, code highlighting accuracy, and pacing. Check YouTube description for proper formatting of prerequisites and learning outcomes. Ensure code snippets are readable and terminal animations are properly synchronized. Technical Dependencies Claude AI (Anthropic)**: For intelligent content analysis, tutorial outline generation, section structuring, and narration script writing with natural language processing. Google Cloud Text-to-Speech**: For converting narration scripts into professional-quality audio with neural voice models (en-US-Neural2-J recommended for technical content). Remotion API**: For programmatic video rendering, scene composition, code syntax highlighting, terminal animations, and transition effects (requires custom React components). YouTube Data API v3**: For automated video uploads, metadata management, thumbnail generation, and playlist organization. Google Drive API**: For backup storage, file sharing, and archival of raw video files with organized folder structures. n8n Platform**: For workflow orchestration, webhook handling, conditional logic, error handling, and execution monitoring. JavaScript Runtime**: For custom content parsing, JSON manipulation, code language detection, timing calculations, and data transformation in Code nodes. 
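The "code block extraction with language detection" step can be sketched as follows. This is my own stand-in, not the template's actual Code node: it finds Markdown fenced blocks and HTML `<code class="language-*">` blocks in fetched documentation.

```python
import re

# Build the fence marker programmatically so this example nests cleanly.
FENCE = "`" * 3
FENCE_RE = re.compile(FENCE + r"(\w+)?\n(.*?)" + FENCE, re.DOTALL)
HTML_RE = re.compile(r'<code class="language-(\w+)">(.*?)</code>', re.DOTALL)

def extract_code_blocks(doc: str):
    """Return {language, code} dicts for every block found in `doc`."""
    found = FENCE_RE.findall(doc) + HTML_RE.findall(doc)
    return [{"language": lang or "plaintext", "code": code.strip()}
            for lang, code in found]

sample = (FENCE + "python\nprint('hi')\n" + FENCE +
          '\n<code class="language-bash">ls -la</code>')
blocks = extract_code_blocks(sample)
print([b["language"] for b in blocks])  # ['python', 'bash']
```

The detected language is what drives the syntax-highlighting theme chosen for each code editor scene.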
### Customization Possibilities
- **Voice Customization**: Change the narrator voice in the "Generate Narration Script" node by modifying the voice parameter. Google TTS offers multiple voices (male, female, different accents). Adjust speed (0.25-4.0) and pitch (-20 to +20) for different pacing styles. Use different voices for intro/outro vs. main content.
- **Video Branding**: Add custom intro/outro animations by modifying the Remotion composition. Include your logo, channel name, and subscribe animations. Customize color schemes in code editor themes (Dracula, Monokai, Solarized, One Dark). Add watermarks or corner branding throughout the video.
- **Code Editor Themes**: Change syntax highlighting themes in the "Create Visual Scenes" node. Popular options include Dracula (default), VS Code Dark+, GitHub Light, Monokai Pro, and Nord. Adjust font sizes, line spacing, and highlighting animation speeds for readability.
- **Content Filtering**: Add pre-processing logic to filter specific documentation sections. Skip changelog entries, API reference tables, or installation instructions if not needed. Focus on tutorial-style content only. Add minimum/maximum content length thresholds.
- **Multi-Language Support**: Extend the workflow to detect the documentation language and use appropriate TTS voices. Support Spanish (es-ES), French (fr-FR), German (de-DE), Japanese (ja-JP), and other languages. Generate localized titles and descriptions.
- **Advanced Visual Types**: Add screen recording capabilities for live demonstrations. Include animated flowcharts using Mermaid or D3.js. Generate architecture diagrams from code structure. Add picture-in-picture video of an instructor or animated avatar.
- **Tutorial Complexity Detection**: Use Claude AI to assess documentation difficulty level and adjust pacing accordingly. Beginner content gets slower narration and more detailed explanations; advanced content can move faster with less repetition.
- **Interactive Elements**: Generate timestamp chapters for YouTube with clickable sections. Create an accompanying blog post or GitHub repository with code examples. Generate quiz questions based on content for learning validation.
- **Quality Assurance**: Add validation nodes to check video quality before upload. Verify that audio levels are balanced, code is readable at 1080p, and the total duration matches expectations. Implement retry logic for failed renders.
- **Batch Processing**: Extend the webhook to accept multiple URLs for bulk video generation. Create playlists automatically for related documentation pages. Schedule sequential uploads to avoid flooding your channel.
- **Analytics Integration**: Track video performance by connecting to the YouTube Analytics API. Monitor view counts, engagement rates, and audience retention. Use insights to improve future video generation parameters.
- **Cost Optimization**: Implement caching for previously processed documentation URLs to avoid redundant API calls. Use cheaper TTS voices for internal testing. Compress videos before upload while maintaining quality. Set API rate limits to control costs.
- **Custom Remotion Components**: Build specialized React components for your tech stack (e.g., database schema visualizers, API request/response animations, deployment pipeline diagrams). Create reusable templates for common tutorial patterns.
- **Notification System**: Add email or Slack notifications when videos complete processing. Include video URLs, processing time, and any errors encountered. Send daily summaries of generated videos.
- **SEO Enhancement**: Use Claude AI to generate SEO-optimized titles, descriptions, and tags. Research trending keywords in your niche. Auto-generate closed captions and subtitles for accessibility and searchability.

### Explore More AI Video Automation
Contact us to design custom video automation workflows for product demos, educational content, marketing videos, or AI-powered content creation pipelines tailored to your business needs.
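The timestamp-chapter idea mentioned under Interactive Elements is simple to implement from the per-section duration estimates in the AI tutorial outline. A minimal sketch (section names and durations are invented for illustration):

```python
def format_chapters(sections):
    """sections: list of (title, duration_seconds) in playback order.
    Returns the 'MM:SS Title' chapter list YouTube expects in a description."""
    lines, elapsed = [], 0
    for title, duration in sections:
        minutes, seconds = divmod(elapsed, 60)
        lines.append(f"{minutes:02d}:{seconds:02d} {title}")
        elapsed += duration
    return "\n".join(lines)

outline = [("Introduction", 45), ("Installation", 90), ("First Request", 120)]
print(format_chapters(outline))
# 00:00 Introduction
# 00:45 Installation
# 02:15 First Request
```

YouTube enables clickable chapters automatically when a description contains a list like this starting at 00:00.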
by Airtop
## Automating LinkedIn Company Page Discovery

### Use Case
Finding the official LinkedIn page of a company is crucial for tasks like outreach, research, or enrichment. This automation streamlines the process by intelligently searching the company's website and LinkedIn to locate the correct profile.

### What This Automation Does
This automation identifies a company's LinkedIn page using the following input parameters:
- **Company domain**: The official website domain of the company (e.g., company.com).
- **Airtop Profile (connected to LinkedIn)**: The name of your Airtop Profile authenticated on LinkedIn.

### How It Works
1. Launches an Airtop session using the provided authenticated profile.
2. Attempts to extract a LinkedIn link directly from the company's website.
3. If not found, performs a LinkedIn search for the company.
4. If still unsuccessful, falls back to a Google search.
5. Validates the most likely result to confirm it's a LinkedIn company page.
6. Outputs the verified LinkedIn URL.

### Setup Requirements
- Airtop API Key (free to generate).
- An Airtop Profile logged in to LinkedIn (requires a one-time login).

### Next Steps
- **Combine with People Enrichment**: Use this with LinkedIn profile discovery for full contact + company data workflows.
- **CRM Integration**: Automatically enrich company records with LinkedIn links.
- **Build Custom Outreach Workflows**: Connect company pages to SDR tooling for deeper research and engagement.

Read more about how to find a LinkedIn company page.
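The validation step above can be illustrated with a small URL check. This is my own sketch, not Airtop's validation logic: it confirms a candidate URL is a LinkedIn *company* page rather than a personal profile or search result.

```python
import re

# Matches linkedin.com/company/<slug>, with or without www or a trailing slash.
COMPANY_RE = re.compile(
    r"^https?://(www\.)?linkedin\.com/company/[A-Za-z0-9\-_%.]+/?$")

def is_company_page(url: str) -> bool:
    return bool(COMPANY_RE.match(url.strip()))

print(is_company_page("https://www.linkedin.com/company/n8n-io"))  # True
print(is_company_page("https://www.linkedin.com/in/some-person"))  # False
```

A check like this is a cheap final gate before writing the URL back to a CRM record.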
by Agent Studio
### Who is it for
- Customer service or support teams who want to use their Zendesk articles in other tools.
- Content/knowledge managers consolidating or migrating knowledge bases.
- Ops/automation specialists who want Markdown versions of articles (could be adapted to Notion, Google Sheets, or any Markdown-friendly system).

### How to get started
1. Download the template and install it on your instance.
2. Set Zendesk and Airtable credentials.
3. Modify the Zendesk base_url and Airtable's table and base.
4. Run the workflow once manually to fetch your existing articles.
5. Finally, modify the Schedule Trigger (by default it runs every 30 days) and activate the workflow.

### Prerequisites
- **Airtable base** set up using this template. It includes the fields Title, Content, URL, and Article ID.
- **Zendesk account** with API access (read permissions for Help Center articles).
- **Zendesk API credentials** (see instructions below).
- **Airtable API credentials** (see instructions below).

### Getting Your Credentials
**Airtable:**
1. Sign up or log in to Airtable.
2. Go to your account settings and generate a Personal Access Token (recommended scopes: data.records:read, data.records:write).
3. In n8n, create new Airtable credentials using this token.

**Zendesk:**
1. Log in to your Zendesk dashboard.
2. Go to Admin Center > Apps and Integrations > Zendesk API.
3. Enable "Token Access" and create an API token.
4. In n8n, add Zendesk credentials with your Zendesk domain, email, and the API token.

### How it works
1. **Triggers**
   - **Manual**: For first setup, use the Manual Trigger to fetch **all** existing articles.
   - **Scheduled**: Automatically runs every N days to fetch only **new or updated** articles since the last run.
2. **Fetch Articles from Zendesk**: Calls the Zendesk Help Center API, using pagination to handle large volumes.
3. **Extract and Prepare Data**: Splits out each article, then collects the fields id, url, title, and body. Converts the article body from HTML to Markdown (for portability and easier reuse).
4. **Upsert Into Airtable**: Inserts new articles, or updates existing ones (using Article ID as the unique key). Fields stored: Title, Content (Markdown), URL, Article ID.

### Airtable Template
Use this Airtable template as your starting point. Make sure the table has the columns: Title, Content, URL, Article ID. You can add more depending on your needs.

### Example Use Cases
- Migrating Zendesk articles to another knowledge base.
- Building an internal knowledge hub in Airtable or Notion.
- Creating Markdown backups for compliance or versioning.

### Service
If you need help implementing the template or modifying it, just reach out.
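The upsert semantics can be sketched in plain Python (the real workflow delegates this to n8n's Airtable node): existing records are indexed by Article ID, and each incoming article either updates the matching record or is appended as a new one.

```python
def upsert_articles(existing, incoming):
    """Merge two record lists on the 'Article ID' field; incoming values win."""
    by_id = {rec["Article ID"]: rec for rec in existing}
    for article in incoming:
        by_id[article["Article ID"]] = article
    return list(by_id.values())

existing = [{"Article ID": 1, "Title": "Old title"}]
incoming = [{"Article ID": 1, "Title": "New title"},
            {"Article ID": 2, "Title": "Brand new"}]
merged = upsert_articles(existing, incoming)
print(len(merged))  # 2: one updated record, one newly inserted
```

Keying on Article ID is what makes repeated scheduled runs idempotent: re-fetching an unchanged article simply overwrites the row with identical data.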
by Airtop
## Automating LinkedIn Connection Requests

### Use Case
Automatically sending LinkedIn connection requests to prospects can significantly streamline your outreach process. This automation ensures you only send requests to users you're not already connected with, and can optionally include a personalized message.

### What This Automation Does
This automation sends a LinkedIn connection request using the following input parameters:
- **linked_url**: The LinkedIn profile URL of the person you want to connect with.
- **airtop_profile**: The name of your Airtop Profile authenticated on LinkedIn.
- **message** (optional): The note you want to include with your connection request.

### How It Works
1. Starts an Airtop browser session using your authenticated profile.
2. Opens the target LinkedIn profile in a new browser window.
3. Detects whether you're already connected or a connection request is pending.
4. If the "Connect" button is available:
   - If no message is provided, clicks "Connect" and sends the request without a note.
   - If a message is provided, clicks "Add a note", types the message, and sends the request.
5. Terminates the browser session.

### Setup Requirements
- Airtop API Key (free to generate).
- An Airtop Profile logged in to LinkedIn (requires one-time authentication).

### Next Steps
- **Pair with People Enrichment**: Use with the LinkedIn Profile Finder to generate URLs before sending requests.
- **CRM Integration**: Log connection attempts and responses in your CRM.
- **Campaign Sequencing**: Combine with message follow-up automations for a complete outreach flow.

Read more about automating LinkedIn connection requests.
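The branching in "How It Works" reduces to a small decision function. The state names and action labels here are my own illustration of the logic, not Airtop's API:

```python
def decide_action(state, message=None):
    """state: 'connected', 'pending', or 'connect_available'."""
    if state in ("connected", "pending"):
        return "skip"  # nothing to do; end the browser session
    if state == "connect_available":
        # A note changes the click sequence: Connect -> Add a note -> Send.
        return "connect_with_note" if message else "connect"
    raise ValueError(f"unknown profile state: {state}")

print(decide_action("pending"))                        # skip
print(decide_action("connect_available"))              # connect
print(decide_action("connect_available", "Hi Jane!"))  # connect_with_note
```

Keeping this check before any click is what prevents duplicate requests to people you already know.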
by Amjid Ali
## Proxmox AI Agent with n8n and Generative AI Integration

This template automates IT operations on a Proxmox Virtual Environment (VE) using an AI-powered conversational agent built with n8n. By integrating Proxmox APIs and generative AI models (e.g., Google Gemini), the workflow converts natural language commands into API calls, enabling seamless management of your Proxmox nodes, VMs, and clusters.

- Buy my book: Mastering n8n on Amazon
- Full courses & tutorials: http://lms.syncbricks.com
- Watch the video on YouTube

### How It Works
1. **Trigger Mechanism**: The workflow can be triggered through multiple channels like chat (Telegram, email, or n8n's built-in chat). Interact with the AI agent conversationally.
2. **AI-Powered Parsing**: A connected AI model (Google Gemini or other compatible models like OpenAI or Claude) processes your natural language input to determine the required Proxmox API operation.
3. **API Call Generation**: The AI parses the input and generates structured JSON output, which includes:
   - response_type: The HTTP method (GET, POST, PUT, DELETE).
   - url: The Proxmox API endpoint to execute.
   - details: Any required payload parameters for the API call.
4. **Proxmox API Execution**: The structured output is used to make HTTP requests to the Proxmox VE API. The workflow supports various operations, such as:
   - Retrieving cluster or node information.
   - Creating, deleting, starting, or stopping VMs.
   - Migrating VMs between nodes.
   - Updating or resizing VM configurations.
5. **Response Formatting**: The workflow formats API responses into a user-friendly summary. For example:
   - Success messages for operations (e.g., "VM started successfully").
   - Error messages with missing parameter details.
6. **Extensibility**: You can enhance the workflow by connecting additional triggers, external services, or AI models. It supports:
   - Telegram/Slack integration for real-time notifications.
   - Backup and restore workflows.
   - Cloud monitoring extensions.
### Key Features
- **Multi-Channel Input**: Use chat, email, or custom triggers to communicate with the AI agent.
- **Low-Code Automation**: Easily customize the workflow to suit your Proxmox environment.
- **Generative AI Integration**: Supports advanced AI models for precise command interpretation.
- **Proxmox API Compatibility**: Fully adheres to Proxmox API specifications for secure and reliable operations.
- **Error Handling**: Detects and informs you of missing or invalid parameters in your requests.

### Example Use Cases
1. **Create a Virtual Machine**
   - Input: "Create a VM with 4 cores, 8GB RAM, and 50GB disk on psb1."
   - Action: Sends a POST request to Proxmox to create the VM with the specified configuration.
2. **Start a VM**
   - Input: "Start VM 105 on node psb2."
   - Action: Executes a POST request to start the specified VM.
3. **Retrieve Node Details**
   - Input: "Show the memory usage of psb3."
   - Action: Sends a GET request and returns the node's resource utilization.
4. **Migrate a VM**
   - Input: "Migrate VM 202 from psb1 to psb3."
   - Action: Executes a POST request to move the VM, with optional online migration.

### Prerequisites
1. **Proxmox API Configuration**
   - Enable the Proxmox API and generate API keys in the Proxmox Data Center.
   - Use the Authorization header with the format: PVEAPIToken=<user>@<realm>!<token-id>=<token-value>
2. **n8n Setup**
   - Add Proxmox API credentials in n8n using Header Auth.
   - Connect a generative AI model (e.g., Google Gemini) via the relevant credential type.
3. **Access the Workflow**
   - Import this template into your n8n instance.
   - Replace placeholder credentials with your Proxmox and AI service details.

### Additional Notes
This template is designed for Proxmox 7.x and above. For advanced features like backups, VM snapshots, and detailed node monitoring, you can extend this workflow. Always test in a non-production Proxmox environment before deploying to live systems.

Start with n8n | Learn n8n with Amjid | Get the n8n Book | What is Proxmox
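A minimal sketch of the API Call Generation step, assuming the structured JSON output described above. The PVEAPIToken header format matches Proxmox's API token scheme, but the helper itself, along with the host and token values, is illustrative:

```python
def build_proxmox_request(ai_output, host, token):
    """Turn the AI's {response_type, url, details} JSON into an HTTP request spec."""
    return {
        "method": ai_output["response_type"].upper(),
        "url": f"https://{host}:8006{ai_output['url']}",  # 8006 is Proxmox's default API port
        "headers": {"Authorization": f"PVEAPIToken={token}"},
        "json": ai_output.get("details") or None,
    }

# Example: the agent decided to start VM 105 on node psb2.
ai_output = {
    "response_type": "post",
    "url": "/api2/json/nodes/psb2/qemu/105/status/start",
    "details": {},
}
req = build_proxmox_request(ai_output, "proxmox.local", "root@pam!n8n=abc123")
print(req["method"], req["url"])
```

In the workflow this dictionary feeds directly into an HTTP Request node; keeping the AI's output to these three fields is what makes the agent safe to validate before execution.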
by Kumar Shivam
## AI Blog Generator for Shopify Products using GPT-4o

The AI Blog Generator is an advanced automation workflow powered by n8n, integrating GPT-4o and Google Sheets to generate SEO-rich blog articles for Shopify products. It automates the entire process, from pulling product data and analyzing images for nutritional information to producing structured HTML content ready for publishing, with zero manual writing.

### Key Advantages
- **Shopify Product Sync**: Automatically pulls product data (title, description, images, etc.) via the Shopify API.
- **AI-Powered Nutrition Extraction**: Uses GPT-4o to intelligently analyze product images and extract nutritional information.
- **SEO Blog Generation**: GPT-4o generates blog titles, meta descriptions, and complete articles using both product metadata and the extracted nutritional info.
- **Structured Content Output**: Produces well-formatted HTML with headers, bullet points, and nutrition tables for seamless Shopify blog integration.
- **Google Sheets Integration**: Tracks blog creation, manages retries, and prevents duplicate publishing using a centralized Google Sheet.
- **Shopify Blog API Integration**: Publishes the generated blog to Shopify using a two-step blog + article API call.

### How It Works
1. **Manual Trigger**: Initiate the process using a test trigger or a scheduler.
2. **Fetch Products from Shopify**: Retrieves all product details, including descriptions and images.
3. **Extract Product Images**: Splits and processes each image individually.
4. **OCR + Nutrition AI**: GPT-4o reads nutrition facts from product images and skips items without valid info.
5. **Check Existing Logs**: References a Google Sheet to avoid duplicates and determine retry status.
6. **AI Blog Generation**: Creates a blog with headings, bullet points, an intro, and a nutrition table.
7. **Shopify Blog + Article Posting**: Uses the Shopify API to publish the blog and its content.
8. **Update Google Sheet**: Logs the blog URL, HTML content, errors, and status for future reference.
### Setup Steps
- **Shopify Node**: Connects to your Shopify store and fetches product data.
- **Split Out Node**: Divides product images for individual OCR processing.
- **OpenAI Node**: Uses GPT-4o to extract nutrition data from images.
- **If Node**: Filters for entries with valid nutrition information.
- **Edit Fields Node**: Formats the product data for AI processing.
- **AI Agent Node**: Generates SEO blog content.
- **Google Sheets Nodes**: Read and update blog creation status.
- **HTTP Request Nodes**: Post the blog and article via Shopify's API.

### Credentials Required
- **Shopify Access Token**: For retrieving product data and posting blogs.
- **OpenAI API Key**: For GPT-4o-based AI generation and image processing.
- **Google Sheets OAuth**: For accessing the log sheet.

### Ideal For
- Ecommerce teams looking to automate content for hundreds of products
- Shopify store owners aiming to boost organic traffic through blogging
- Marketing teams building scalable, AI-driven content workflows

### Bonus Tip
The workflow is modular. You can easily extend it with internal linking, language translation, or even social media sharing, all within the same n8n flow.
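The "two-step blog + article" publishing call can be sketched as follows. The endpoint paths follow Shopify's Admin REST API convention (`blogs.json`, then `articles.json` under the blog), but the helper, its inputs, and the API version string are illustrative, not the template's exact HTTP Request node settings:

```python
def build_publish_requests(store, blog_title, article_title, body_html):
    """Build the two request specs for Shopify's blog-then-article flow."""
    base = f"https://{store}.myshopify.com/admin/api/2024-01"
    create_blog = {
        "method": "POST",
        "url": f"{base}/blogs.json",
        "json": {"blog": {"title": blog_title}},
    }
    # The article call needs the blog id returned by the first request;
    # "{blog_id}" is a placeholder filled in after step one completes.
    create_article = {
        "method": "POST",
        "url": f"{base}/blogs/{{blog_id}}/articles.json",
        "json": {"article": {"title": article_title, "body_html": body_html}},
    }
    return create_blog, create_article

blog_req, article_req = build_publish_requests(
    "example-store", "Nutrition Corner", "Why Oats?", "<h2>Benefits</h2>")
print(blog_req["url"])
```

Splitting the publish into two requests is what lets the workflow reuse an existing blog: if the blog already exists, only the second call runs.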
by Mihai Farcas
## Chat with local LLMs using n8n and Ollama

This n8n workflow allows you to seamlessly interact with your self-hosted Large Language Models (LLMs) through a user-friendly chat interface. By connecting to Ollama, a powerful tool for managing local LLMs, you can send prompts and receive AI-generated responses directly within n8n.

### Use cases
- **Private AI Interactions**: Ideal for scenarios where data privacy and confidentiality are important.
- **Cost-Effective LLM Usage**: Avoid ongoing cloud API costs by running models on your own hardware.
- **Experimentation & Learning**: A great way to explore and experiment with different LLMs in a local, controlled environment.
- **Prototyping & Development**: Build and test AI-powered applications without relying on external services.

### How it works
1. **When chat message received**: Captures the user's input from the chat interface.
2. **Chat LLM Chain**: Sends the input to the Ollama server, receives the AI-generated response, and delivers it back to the chat interface.

### Set up steps
1. Make sure Ollama is installed and running on your machine before executing this workflow.
2. Edit the Ollama address if it differs from the default.
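For reference, a sketch of the request the n8n Ollama integration ultimately makes: a POST to the local Ollama server's `/api/chat` endpoint. The default address (localhost:11434) and the chat payload shape match Ollama's API; the model name is just an example you would swap for one you have pulled.

```python
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_payload(user_message, model="llama3.1"):
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # return one complete response instead of chunks
    }

payload = build_chat_payload("Summarize what n8n does in one sentence.")
print(payload["messages"][0]["role"])  # user
```

Because the server runs locally, the prompt and response never leave your machine, which is the privacy advantage described above.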
by David Olusola
A complete, ready-to-deploy Telegram chatbot template for food delivery businesses. This intelligent assistant handles orders, payments, customer service, and order tracking with human-in-the-loop payment verification. β¨ Key Features π€ AI-Powered Conversations - Natural language order processing using Google Gemini π± Telegram Integration - Seamless customer interaction via Telegram π³ Payment Verification - Screenshot-based payment confirmation with admin approval π Order Tracking - Automatic Google Sheets logging of all orders π§ Memory Management - Contextual conversation memory for better customer experience π Multi-Currency Support - Easily customizable for any currency (USD, EUR, GBP, etc.) π Location Flexible - Adaptable to any city/country π Human Oversight - Manual payment approval workflow for security π οΈ What This Template Includes Core Workflow Customer Interaction - AI assistant takes orders via Telegram Order Confirmation - Summarizes order with total and payment details Information Collection - Gathers customer name, phone, and delivery address Payment Processing - Handles payment screenshots and verification Admin Approval - Human verification of payments before order confirmation Order Tracking - Automatic logging to Google Sheets with delivery estimates Technical Components AI Agent Node - Google Gemini-powered conversation handler Memory System - Maintains conversation context per customer Google Sheets Integration - Automatic order logging and tracking Telegram Nodes - Customer and admin communication Payment Verification - Screenshot detection and approval workflow Conditional Logic - Smart routing based on message types π Quick Setup Guide Prerequisites n8n instance (cloud or self-hosted) Telegram Bot Token Google Sheets API access Google Gemini API key Step 1: Replace Placeholders Search and replace the following placeholders throughout the template: Business Information [YOUR_BUSINESS_NAME] β Your restaurant/food business name 
- `[ASSISTANT_NAME]` – Your bot's name (e.g., "Alex", "Bella", "Chef Bot")
- `[YOUR_CITY]` – Your city
- `[YOUR_COUNTRY]` – Your country
- `[YOUR_ADDRESS]` – Your business address
- `[YOUR_PHONE]` – Your business phone number
- `[YOUR_EMAIL]` – Your business email
- `[YOUR_HOURS]` – Your operating hours (e.g., "9AM - 11PM daily")

**Currency & Localization**
- `[YOUR_CURRENCY]` – Your currency name (e.g., "USD", "EUR", "GBP")
- `[CURRENCY_SYMBOL]` – Your currency symbol (e.g., "$", "€", "£")
- `[YOUR_TIMEZONE]` – Your timezone (e.g., "EST", "PST", "GMT")
- `[PREFIX]` – Order ID prefix (e.g., "FB" for "Food Business")

**Menu Items (Customize Completely)**
- `[CATEGORY_1]` – Food category (e.g., "Burgers", "Pizza", "Sandwiches")
- `[ITEM_1]` through `[ITEM_8]` – Your menu items
- `[PRICE_1]` through `[DELIVERY_FEE]` – Your prices
- Add or remove categories and items as needed

**Payment & Support**
- `[YOUR_PAYMENT_DETAILS]` – Your payment information
- `[YOUR_PAYMENT_PROVIDER]` – Your payment method (e.g., "Venmo", "PayPal", "Bank Transfer")
- `[YOUR_SUPPORT_HANDLE]` – Your Telegram support username

### Step 2: Configure Credentials
1. **Telegram Bot** – Add your bot token to Telegram credentials
2. **Google Sheets** – Connect your Google account and create/select your orders spreadsheet
3. **Google Gemini** – Add your Gemini API key
4. **Sheet ID** – Replace `[YOUR_GOOGLE_SHEET_ID]` with your actual Google Sheet ID

### Step 3: Customize Menu
Update the menu section in the AI Agent system message with your actual:
- Food categories
- Item names and prices
- Delivery fees
- Any special offerings or combos

### Step 4: Test & Deploy
1. Import the template into your n8n instance
2. Test the conversation flow with a test Telegram account
3. Verify Google Sheets logging works correctly
4. Test the payment approval workflow
5. Activate the workflow

### Currency Examples

**USD Version – MENU & PRICES (USD)**

Burgers
- Classic Burger – $12.99
- Cheese Burger – $14.99
- Deluxe Burger – $18.99
- Delivery Fee – $3.99

**EUR Version – MENU & PRICES (EUR)**

Burgers
- Classic Burger – €11.50
- Cheese Burger – €13.50
- Deluxe Burger – €17.50
- Delivery Fee – €3.50

### Google Sheets Structure
The template automatically logs orders with these columns:
Order ID, Customer Name, Chat ID, Phone Number, Delivery Address, Order Info, Total Price, Payment Status, Order Status, Timestamp

### Customization Options

**Easy Customizations**
- **Menu Items** – Add/remove/modify any food items
- **Pricing** – Update to your local pricing structure
- **Currency** – Change to any currency worldwide
- **Business Hours** – Modify operating hours
- **Delivery Areas** – Add location restrictions
- **Payment Methods** – Update payment information
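The order record that lands in Google Sheets can be sketched in plain Python. The column names follow the sheet structure described above; the `[PREFIX]-<timestamp>` order-ID scheme and the sample values are illustrative assumptions, not the template's exact implementation:

```python
from datetime import datetime, timezone

# Columns mirror the Google Sheets structure described above.
SHEET_COLUMNS = [
    "Order ID", "Customer Name", "Chat ID", "Phone Number",
    "Delivery Address", "Order Info", "Total Price",
    "Payment Status", "Order Status", "Timestamp",
]

def make_order_row(prefix, customer, chat_id, phone, address, items, total):
    """Build one sheet row; the <prefix>-<timestamp> ID scheme is an assumption."""
    now = datetime.now(timezone.utc)
    order_id = f"{prefix}-{now.strftime('%Y%m%d%H%M%S')}"
    return dict(zip(SHEET_COLUMNS, [
        order_id, customer, str(chat_id), phone, address,
        "; ".join(items), f"{total:.2f}",
        "Pending", "Received", now.isoformat(),
    ]))

row = make_order_row("FB", "Jane Doe", 123456789, "+1-555-0100",
                     "42 Main St", ["Classic Burger", "Fries"], 16.98)
```

A new order starts with `Payment Status = Pending` and `Order Status = Received`, which the payment-approval step would later update in place.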
by Bright Data
## Yelp Business Finder: Scraping Local Businesses by Keyword, Category & Location Using Bright Data and Google Sheets

**Description:** Automate local business data collection from Yelp using AI-powered input validation, Bright Data scraping, and automatic Google Sheets integration. Perfect for market research, lead generation, and competitive analysis.

### How It Works
1. **Form Submission:** Users submit a simple form with country, location, and business category parameters.
2. **AI Validation:** Google Gemini AI validates and cleans input data, ensuring proper formatting and Yelp category alignment.
3. **Data Scraping:** Bright Data's Yelp dataset API scrapes business information based on the cleaned parameters.
4. **Status Monitoring:** The workflow monitors scraping progress and waits for data completion.
5. **Data Export:** Final business data is automatically appended to your Google Sheets for easy analysis.

### Setup Steps
**Estimated Setup Time:** 10-15 minutes

**Prerequisites**
- Active n8n instance (cloud or self-hosted)
- Google account with Sheets access
- Bright Data account with Yelp scraping dataset
- Google Gemini API access

**Configuration Steps**
1. **Import Workflow:**
   - Copy the provided JSON workflow
   - In n8n: Go to Workflows → + Add workflow → Import from JSON
   - Paste the JSON and click Import
2. **Configure Google Sheets:**
   - Create a new Google Sheet or use an existing one
   - Set up OAuth2 credentials in n8n
   - Update the Google Sheets node with your document ID
   - Configure column mappings for business data
3. **Setup Bright Data:**
   - Add your Bright Data API credentials to n8n
   - Replace `BRIGHT_DATA_API_KEY` with your actual API key
   - Verify your Yelp dataset ID in the HTTP request nodes
   - Test the connection
4. **Configure Google Gemini:**
   - Add your Google Gemini API credentials
   - Test the AI Agent connection
   - Verify the model configuration
5. **Test & Activate:**
   - Activate the workflow using the toggle switch
   - Test with sample data: country="US", location="New York", category="restaurants"
   - Verify data appears correctly in your Google Sheet

### Data Output
| Field | Description |
|---|---|
| Business Name | Official business name from Yelp |
| Overall Rating | Average customer rating (1-5 stars) |
| Reviews Count | Total number of customer reviews |
| Categories | Business categories and tags |
| Website URL | Official business website |
| Phone Number | Contact phone number |
| Address | Full business address |
| Yelp URL | Direct link to Yelp listing |

### Use Cases
- **Market Research** – Analyze local business landscapes and competition
- **Lead Generation** – Build prospect lists for B2B outreach
- **Location Analysis** – Research business density by area and category
- **Competitive Intelligence** – Monitor competitor ratings and customer feedback

### Important Notes
- Ensure you comply with Yelp's terms of service and rate limits
- Bright Data usage may incur costs based on your plan
- AI validation helps improve data quality and reduce errors
- Monitor your Google Sheets for data accuracy

### Troubleshooting
Common issues:
- **API Rate Limits:** Implement delays between requests if needed
- **Invalid Categories:** AI agent helps standardize category names
- **Empty Results:** Verify location spelling and category alignment
- **Authentication Errors:** Check all API credentials and permissions

Ready to start scraping Yelp business data efficiently!
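The "Status Monitoring" step above boils down to a poll-and-wait loop: ask the dataset API whether the scrape job is finished, wait, and retry until it is ready (or fails). The sketch below captures that control flow with a stubbed status check standing in for the real Bright Data HTTP calls, whose exact endpoints and response fields are not shown here:

```python
import time

def poll_until_ready(check_status, interval_s=1.0, max_attempts=10):
    """Generic poll-and-wait loop, as in the workflow's status-monitoring step.

    check_status() should return "running", "ready", or "failed"; in the
    real workflow this would be an HTTP request to the scraping API.
    """
    for _ in range(max_attempts):
        status = check_status()
        if status == "ready":
            return True
        if status == "failed":
            raise RuntimeError("scrape job failed")
        time.sleep(interval_s)
    raise TimeoutError("scrape job did not finish in time")

# Stubbed status sequence standing in for successive API responses.
responses = iter(["running", "running", "ready"])
done = poll_until_ready(lambda: next(responses), interval_s=0.01)
```

In n8n this loop is typically expressed as a Wait node feeding back into the status-check HTTP node until the response indicates completion.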
by Cyril Nicko Gaspar
## AI Agent Template with Bright Data MCP Tool Integration

This template retrieves all available tools from the Bright Data MCP server, exposes them through a chatbot, and runs whichever tool matches the user's query.

### Problem It Solves
MCP addresses the complexity of traditional automation, where users need specific knowledge of APIs or interfaces to trigger backend processes. By allowing interaction through natural language, automatically classifying and routing queries, and managing context and memory effectively, MCP simplifies complex data operations, customer support, and workflow orchestration scenarios where inputs and responses change dynamically.

### Prerequisites
Before deploying this template, ensure you have:
- An active n8n instance (self-hosted or cloud)
- A valid OpenAI API key (or a key for another AI model provider)
- Access to the Bright Data MCP API with credentials
- Basic familiarity with n8n workflows and nodes

### Setup Instructions
**Install the MCP Community Node in n8n**
1. In your n8n self-hosted instance, go to Settings → Community Nodes.
2. Search for and install `n8n-nodes-mcp`.

**Configure Credentials**
1. Add your OpenAI API key (or another AI model's key) to the relevant nodes. If you want a different AI model, replace all associated OpenAI nodes in the workflow.
2. Set up the Bright Data MCP client credentials in the installed community node (STDIO).
3. Obtain your API key from Bright Data and enter it in the Environment field of the credentials window, written as `API_Key=<your api key from Bright Data>`.

### Workflow Functionality (Summary)
1. A **user message** triggers the workflow.
2. The **AI classifier** (OpenAI) interprets the intent and maps it to a tool from Bright Data MCP.
3. If no match is found, the user is notified.
4. If more information is needed, the AI requests it.
5. **Memory** preserves context for follow-up actions.
6. The tool is executed, and results are returned contextually to the user.
> Optional memory buffer and chat memory manager nodes keep conversations context-aware across multiple messages.

### Use Cases
- **Data Scraping Automation**: Trigger scraping tasks via chat.
- **Lead Generation Bots**: Use MCP tools to fetch, enrich, or validate data.
- **Customer Support Agents**: Automatically classify and respond to queries with tool-backed answers.
- **Internal Workflow Agents**: Let team members trigger backend jobs (e.g., reports, lookups) by chatting naturally.

### Customization
- **Tool Matching Logic**: Modify the AI classifier prompt and schema to suit different APIs or services.
- **Memory Size and Retention**: Adjust memory buffer size and filtering to fit your app's complexity.
- **Tool Execution**: Extend the "Execute the tool" sub-workflow to handle additional actions, fallback strategies, or logging.
- **Frontend Integration**: Connect this with various platforms (e.g., WhatsApp, Slack, web chatbots) using the webhook.

### Summary
This template delivers a powerful no-code/low-code agent that turns chat into automation, combining AI intelligence with real-world tool execution. With minimal setup, you can build contextual, dynamic assistants that drive backend operations using natural language.
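Conceptually, the classifier's job is to map a free-text query onto one of the MCP server's tool names, with a "no match" fallback that triggers the user notification. A minimal keyword-overlap sketch of that routing decision follows; the real template delegates this to an OpenAI classifier, and the tool names and descriptions here are illustrative, not the actual Bright Data MCP tool list:

```python
def match_tool(query, tools):
    """Pick the tool whose name/description overlaps most with the query.

    `tools` maps tool name -> description, as listed by the MCP server.
    Returns None when nothing overlaps, mirroring the "no match" branch.
    """
    words = set(query.lower().split())
    best, best_score = None, 0
    for name, description in tools.items():
        vocab = set(name.lower().replace("_", " ").split())
        vocab |= set(description.lower().split())
        score = len(words & vocab)
        if score > best_score:
            best, best_score = name, score
    return best

# Illustrative tool catalog (not the real Bright Data MCP tool list).
tools = {
    "search_engine": "search the web for pages matching a query",
    "scrape_page": "scrape the contents of a single web page",
}
chosen = match_tool("please scrape this page for me", tools)
```

An LLM classifier replaces this heuristic with semantic intent matching, but the surrounding workflow logic is the same: a chosen tool name is handed to the "Execute the tool" sub-workflow, and `None` routes to the notification branch.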