by Oneclick AI Squad
In this guide, we’ll walk you through setting up an AI-driven workflow that automatically fetches daily sales, food waste, and customer feedback data from Google Sheets, generates actionable insights using AI, merges them into a comprehensive report, and sends it as an email draft. Ready to automate your restaurant’s daily insights? Let’s dive in!

What’s the Goal?

- Automatically retrieve daily sales data, food waste records, and customer feedback from Google Sheets.
- Use AI to analyze data and generate insights, including top performers, waste reduction recommendations, and feedback summaries.
- Merge the insights into a structured daily report.
- Send the report as an AI-generated email draft for review or sending.
- Enable scheduled automation for daily insights delivery.

By the end, you’ll have a self-running system that delivers daily restaurant insights effortlessly.

Why Does It Matter?

Manual data analysis and reporting are time-consuming and error-prone. Here’s why this workflow is a game-changer:

- **Zero Human Error**: AI ensures accurate and consistent insights.
- **Time-Saving Automation**: Instantly process data and draft reports, boosting efficiency.
- **Scheduled Delivery**: Receive insights daily without manual effort.
- **Actionable Insights**: Empower your team with data-driven decisions.

Think of it as your tireless data analyst that keeps your restaurant informed.

How It Works

Here’s the step-by-step magic behind the automation:

1. **Trigger the Workflow**: Initiate the workflow daily using the Daily Report Scheduler node (e.g., every day at a set time).
2. **Fetch Daily Sales Data**: Retrieve sales data from the Google Sheet using the Fetch Daily Sales Data node.
3. **Fetch Daily Food Waste Records**: Retrieve food waste data from the Google Sheet using the Fetch Daily Food Waste Records node.
4. **Fetch Customer Feedback**: Retrieve customer feedback from the Google Sheet using the Fetch Customer Feedback node.
5. **Normalize Sales Records**: Process and standardize sales data for AI analysis (see the sketch after the setup notes below).
6. **Normalize Waste Data**: Process and standardize food waste data for AI analysis.
7. **Normalize Feedback Data**: Process and standardize customer feedback data for AI analysis.
8. **AI Sales Insights Generator**: Use AI (e.g., Google Chat Model) to analyze sales data, identify top performers, and provide recommendations.
9. **AI Waste Reduction Insights Generator**: Use AI to analyze waste data and suggest reduction strategies.
10. **AI Feedback Summary**: Use AI to summarize customer feedback and identify common themes.
11. **Format Sales Output**: Structure the sales insights into a readable format.
12. **Format Waste Output**: Structure the waste reduction insights into a readable format.
13. **Format Feedback AI Output**: Structure the feedback summary into a readable format.
14. **Merge & Create Email**: Combine all formatted insights into a single daily report email draft.
15. **Prepare Email Content**: Finalize the email content for sending.
16. **Send Daily Report**: Send the AI-generated daily summary email via Gmail.

How to Use the Workflow?

Importing a workflow in n8n is a straightforward process that allows you to use pre-built workflows to save time. Below is a step-by-step guide to importing the Restaurant Daily Insights Automation workflow in n8n.

Steps to Import a Workflow in n8n

1. Obtain the Workflow JSON
   - Source the workflow: workflows are shared as JSON files or code snippets, e.g., from the n8n community, a colleague, or exported from another n8n instance.
   - Format: ensure you have the workflow in JSON format, either as a file (e.g., workflow.json) or copied text.
2. Access the n8n Workflow Editor
   - Log in to n8n (via n8n Cloud or a self-hosted instance).
   - Navigate to the Workflows tab in the n8n dashboard.
   - Click Add Workflow to create a blank workflow.
3. Import the Workflow
   - Option 1: Import via JSON Code (Clipboard): click the three dots (⋯) in the top-right corner, select Import from Clipboard, paste the JSON code into the text box, and click Import to load the workflow.
   - Option 2: Import via JSON File: click the three dots (⋯) in the top-right corner, select Import from File, choose the .json file from your computer, and click Open to import.

Setup Notes

- **Google Sheet Columns**:
  - Sales Data Sheet: Date, Item Name, Quantity Sold, Revenue, Cost, Profit.
  - Food Waste Records Sheet: Date, Item Name, Waste Quantity, Reason, Timestamp.
  - Customer Feedback Sheet: Date, Customer Name, Feedback Text, Rating, Timestamp.
- **Google Sheets Credentials**: Configure OAuth2 settings in the fetch nodes with your Google Sheet ID and credentials.
- **AI Models**: Set up the AI nodes (e.g., Google Chat Model) with appropriate API credentials.
- **Gmail Integration**: Authorize the Send Daily Report node with Gmail API credentials to send emails.
- **Scheduling**: Adjust the Daily Report Scheduler node to your preferred time (e.g., daily at 9 AM).
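To make the Normalize steps concrete, here is a minimal sketch of an n8n Code node for the Normalize Sales Records step, based on the sales-sheet columns listed above. The field names and coercion rules are illustrative assumptions, not the template's exact code; the waste and feedback normalizers would follow the same pattern with their own columns.

```javascript
// Hypothetical Code node for "Normalize Sales Records": coerce sheet
// rows into clean, typed fields before they reach the AI node.
return $input.all().map(item => {
  const row = item.json;
  return {
    json: {
      date: String(row['Date'] ?? '').trim(),
      itemName: String(row['Item Name'] ?? '').trim(),
      quantitySold: Number(row['Quantity Sold']) || 0,
      revenue: Number(row['Revenue']) || 0,
      cost: Number(row['Cost']) || 0,
      profit: Number(row['Profit']) || 0,
    },
  };
});
```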
by PUQcloud
Setting up n8n workflow

Overview

The Docker Grafana WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

Prerequisites

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

Installation Steps

Install the required workflow on n8n. You have two options:

- Option 1: Use the latest version from the n8n marketplace. The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- Option 2: Manual installation. Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

n8n Workflow API Backend Setup for WHMCS/WISECP

Configure API Webhook and SSH Access

- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

Modify Template Parameters

In the Parameters block of the template, update the following settings:

- server_domain – Must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – Directory where user data related to Docker and disks will be stored.
- mount_dir – Default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:

- screen_left
- screen_right

Deploy-docker-compose

In the Deploy-docker-compose element, you have the ability to modify the Docker Compose configuration, which will be generated in the following scenarios:

- When the service is created
- When the service is unlocked
- When the service is updated

nginx

In the nginx element, you can modify the configuration parameters of the web interface proxy server.

- The main section allows you to add custom parameters to the server block in the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

Bash Scripts

Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
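Because every script is expected to return either JSON or a plain string, it helps to see how that contract can be consumed. Below is a minimal sketch of an n8n Code node placed after the SSH element; the `stdout` field name and the `ok`/`message` output shape are assumptions for illustration, not part of the module itself.

```javascript
// Hypothetical Code node: normalize SSH script output into one shape.
// Assumes the SSH node exposes the script's output as item.json.stdout.
return $input.all().map(item => {
  const raw = String(item.json.stdout ?? '').trim();
  try {
    // Successful scripts print a JSON document.
    return { json: { ok: true, result: JSON.parse(raw) } };
  } catch (e) {
    // Anything else is treated as a plain status string.
    return { json: { ok: false, message: raw } };
  }
});
```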
by Garri
Description

This workflow is designed to automate the security reputation check of domains and IP addresses using multiple APIs such as VirusTotal, AbuseIPDB, and Google DNS. It assesses potential threats, including malicious and suspicious scores, as well as email security configurations (SPF, DKIM, DMARC). The analysis results are processed by AI to produce a concise assessment, then automatically updated in Google Sheets for documentation and follow-up.

How It Works

1. Automatic Trigger – The workflow runs periodically via a Schedule Trigger.
2. Data Retrieval – Fetches a list of domains from Google Sheets with status "To do".
3. Domain Analysis – Uses the VirusTotal API to get the domain report, perform a rescan, and check IP resolutions.
4. IP Analysis – Checks IP reputation using AbuseIPDB.
5. Email Security Validation – Verifies SPF, DKIM, and DMARC configurations via Google DNS (see the sketch after the setup steps).
6. AI Assessment – Analysis data is processed by AI to produce a short summary in Indonesian.
7. Data Update – The results are automatically updated in Google Sheets, changing the status to "Done" or adding notes if potential threats are found.

How to Set Up

1. Prepare API keys: sign up and obtain API keys from VirusTotal and AbuseIPDB, and set up access to the Google Sheets API.
2. Configure credentials in n8n: add VirusTotal API, AbuseIPDB API, and Google Sheets OAuth credentials in n8n.
3. Prepare Google Sheets: create a sheet with the columns No, Domain, Customer, Keterangan, Status. Ensure initial data has the status "To do".
4. Import the workflow: upload the workflow JSON file into n8n.
5. Set the Schedule Trigger: define the checking interval as needed (e.g., every 1 hour).
6. Test run: run the workflow manually to ensure all API connections and Google Sheets output work properly.
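For reference, the SPF and DMARC lookups can be reproduced against Google's public DNS-over-HTTPS endpoint (https://dns.google/resolve). Here is a minimal sketch as an n8n Code node; the `Domain` input field is an assumption based on the sheet layout above, and DKIM is omitted because its lookup requires a selector (selector._domainkey.<domain>):

```javascript
// Hypothetical Code node: check SPF and DMARC TXT records via Google DNS.
const domain = $input.first().json.Domain; // assumed sheet column

const txt = async (name) => {
  const data = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://dns.google/resolve?name=${encodeURIComponent(name)}&type=TXT`,
    json: true,
  });
  return (data.Answer ?? []).map(a => a.data);
};

const spf = (await txt(domain)).filter(r => r.includes('v=spf1'));
const dmarc = (await txt(`_dmarc.${domain}`)).filter(r => r.includes('v=DMARC1'));

return [{ json: { domain, spfFound: spf.length > 0, dmarcFound: dmarc.length > 0 } }];
```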
by Nick Saraev
AI Ad Scraper & Image Generator with Facebook Ad Library

Categories: PPC Automation, Creative Generation, Competitive Intelligence

This workflow creates an end-to-end ad library scraper and AI image spinner system that automatically discovers competitor ads, analyzes their design elements, and generates multiple unique variations ready for your own campaigns. Built to eliminate 60-70% of manual creative work for PPC agencies, this system transforms competitor research into actionable ad variants in minutes.

Benefits

- **Automated Competitor Research** - Scrapes Facebook Ad Library for active competitor campaigns automatically
- **AI-Powered Creative Analysis** - Uses OpenAI vision to comprehensively analyze ad design elements and copy
- **Intelligent Image Generation** - Creates 3+ unique variations per source ad while maintaining effective layouts
- **Complete Asset Organization** - Automatically organizes source ads and generated variations in structured Google Drive folders
- **Campaign-Ready Output** - Generates a Google Sheets database with direct links to all assets for immediate campaign deployment
- **Massive Time Savings** - Replaces hours of manual creative work with automated competitive intelligence and generation

How It Works

Facebook Ad Library Scraping:
- Connects to Facebook's Ad Library through Apify scraper integration
- Searches active ads based on keywords, industries, or competitor targeting
- Filters for image-based ads and removes video-only content for processing

Intelligent Asset Organization:
- Creates a unique Google Drive folder structure for each scraped ad campaign
- Separates source competitor ads from AI-generated variations
- Maintains an organized asset library for easy campaign management and iteration

AI-Powered Creative Analysis:
- Uses OpenAI's vision model to comprehensively describe each competitor ad
- Identifies design elements, color schemes, layout patterns, and messaging approaches
- Generates detailed creative briefs for intelligent variation generation

Smart Image Variation System:
- Creates 3 unique style variations per source ad using advanced AI prompting
- Maintains effective layout structures while changing colors, fonts, and styling
- Customizes messaging and branding to match your business requirements

Campaign Database Integration:
- Logs all source ads and generated variations in organized Google Sheets
- Provides direct links to all assets for immediate campaign deployment
- Tracks performance data and creative iterations for ongoing optimization

Required Setup Configuration

Google Drive Structure: The workflow automatically creates this folder organization:

```
PPC Thievery (Parent Folder)
├── [Ad Archive ID] (Per Campaign)
│   ├── 1. Source Assets (Original competitor ads)
│   └── 2. Spun Assets (AI-generated variations)
```

Google Sheets Database Columns:

- timestamp - Unique record identifier
- ad_archive_id - Facebook's internal ad identifier
- page_id - Advertiser's Facebook page ID
- original_image_url - Direct link to source competitor ad
- page_name - Advertiser's business name
- ad_body - Original ad copy text
- date_scraped - When the ad was discovered
- spun_prompts - AI-generated variation instructions
- asset_folder - Link to campaign's Google Drive folder
- source_folder - Link to original ads folder
- spun_folder - Link to generated variations folder
- direct_spun_image_link - Direct link to generated ad image

Set Variables Configuration: Update these values in the "Set Variables" node:

- googleDriveFolderId - Your parent Google Drive folder ID
- changeRequest - Your brand-specific variation instructions
- spreadsheetId - Your Google Sheets database ID

Apify API Setup (see the request sketch at the end of this section):

- Create an Apify account and obtain an API key
- Replace <your-apify-api-key-here> with actual credentials
- Customize search terms in the JSON body for your target competitors
- Adjust the scraping count (default: 20 ads per run)

Business Use Cases

- **PPC Agencies** - Automate competitive research and creative generation for client campaigns
- **E-commerce Brands** - Monitor competitor advertising strategies and generate response campaigns
- **Marketing Teams** - Scale creative production with AI-powered competitive intelligence
- **Freelance Marketers** - Offer advanced competitive analysis and creative services to clients
- **SaaS Companies** - Track competitor messaging and generate differentiated ad variations
- **Agency Teams** - Replace manual creative research with automated competitive intelligence systems

Revenue Potential

This system revolutionizes PPC agency economics:

- **60-70% reduction** in manual creative work and competitive research time
- **3-5x faster** campaign launch times with ready-to-use creative assets
- **$2,000-$5,000 service value** for comprehensive competitive intelligence and creative generation
- **Scalable competitive advantage** through automated monitoring of competitor campaigns
- **Premium positioning** offering AI-powered creative intelligence that competitors can't match manually

Difficulty Level: Advanced
Estimated Build Time: 2-3 hours
Monthly Operating Cost: ~$100 (Apify + OpenAI + Google APIs)

Watch My Complete Live Build

Want to see me build this entire system from scratch? I walk through every component live - including the ad library integration, AI analysis setup, image generation pipeline, and all the debugging that goes into creating a production-ready competitive intelligence system.

🎥 See My Live Build Process: "Ad Library Scraper & AI Image Spinner System (N8N Build)"

This comprehensive tutorial shows the real development process - including advanced AI prompting for image generation, competitive analysis strategies, and the organizational systems that make this scalable for agency use.
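To make the Apify setup concrete, here is a minimal sketch of the scraper call as an n8n Code node. Apify's run-sync-get-dataset-items endpoint is its standard synchronous run API; the actor ID placeholder, search-term field, and input shape are assumptions you should replace with your Facebook Ad Library scraper's actual ID and input schema.

```javascript
// Hypothetical Code node: run an Apify actor and return its dataset items.
const items = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.apify.com/v2/acts/<ACTOR_ID>/run-sync-get-dataset-items?token=<your-apify-api-key-here>',
  json: true,
  body: {
    searchTerms: ['your competitor or keyword'], // assumed input field
    count: 20, // default scraping count from the setup notes
  },
});
return items.map(ad => ({ json: ad }));
```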
Set Up Steps

1. Initial Database Setup:
   - Run the initialization flow once to create your Google Drive folder and Sheets database
   - Copy the generated folder ID and spreadsheet ID into the "Set Variables" node
   - Configure your brand-specific change request template for consistent output
2. Apify Integration:
   - Set up an Apify account with Facebook Ad Library scraper access
   - Configure API credentials and test with small ad batches
   - Customize search parameters for your target competitors and industries
3. AI Service Configuration:
   - Connect the OpenAI API for vision analysis and image generation
   - Set up appropriate rate limiting to control processing costs
   - Test the complete AI pipeline with sample competitor ads
4. Google Services Setup:
   - Configure Google Drive API credentials for automated folder creation
   - Set up Google Sheets integration for campaign database management
   - Test the complete asset organization and tracking workflow
5. Campaign Customization:
   - Define your brand guidelines and messaging requirements in the change request
   - Set up variation templates for different campaign types and industries
   - Configure batch processing limits based on your API usage requirements
6. Production Optimization:
   - Remove the limit node for full-scale competitive monitoring
   - Set up automated scheduling for regular competitive intelligence gathering
   - Monitor and optimize AI prompts based on generated creative quality

Advanced Optimizations

Scale the system with:

- **Multi-Platform Scraping:** Extend to LinkedIn, Twitter, and Google Ads for comprehensive competitive intelligence
- **Performance Tracking:** Integrate with ad platforms to track performance of generated variations
- **Style Guide Automation:** Create industry-specific variation templates for consistent brand application
- **A/B Testing Integration:** Automatically test generated variations against source ads for performance optimization
- **CRM Integration:** Connect competitive intelligence data with sales and marketing systems

Important Considerations

- **API Rate Limits:** Built-in delays prevent service overload and ensure reliable operation
- **Creative Quality:** The system generates multiple variations to account for AI generation variability
- **Legal Compliance:** Use generated variations as inspiration while respecting intellectual property rights
- **Cost Management:** Monitor OpenAI image generation costs and adjust batch sizes accordingly
- **Competitive Ethics:** Focus on learning from successful patterns rather than direct copying

Why This System Works

The competitive advantage lies in speed and scale:

- **Minutes vs. Hours:** Generate campaign-ready creative variations in minutes instead of hours of manual work
- **Systematic Analysis:** AI vision provides consistent, comprehensive analysis that humans might miss
- **Organized Intelligence:** Structured asset management enables rapid campaign deployment and iteration
- **Scalable Monitoring:** Automated competitive research that scales beyond manual capacity
- **Quality Variations:** Multiple AI-generated options ensure high-quality creative output

Check Out My Channel

For more advanced automation systems and proven agency-building strategies that generate real revenue, explore my YouTube channel where I share the exact methodologies used to scale automation agencies to $72K+ monthly revenue.
by mustafa kendigüzel
How it works

This automated workflow discovers trending Instagram posts and creates similar AI-generated content. Here's the high-level process:

1. Content Discovery & Analysis
   - Scrapes trending posts from specific hashtags
   - Analyzes visual elements using AI
   - Filters out videos and duplicates (see the sketch below)
2. AI Content Generation
   - Creates unique images based on trending content
   - Generates engaging captions with relevant hashtags
   - Maintains brand consistency while staying original
3. Automated Publishing
   - Posts content directly to Instagram
   - Monitors publication status
   - Sends notifications via Telegram

Set up steps

Setting up this workflow takes approximately 15-20 minutes:

1. API Configuration (7-10 minutes)
   - Instagram Business Account setup
   - Telegram Bot creation
   - API key generation (OpenAI, Replicate, RapidAPI)
2. Database Setup (3-5 minutes)
   - Create the required database table
   - Configure PostgreSQL credentials
3. Workflow Configuration (5-7 minutes)
   - Set scheduling preferences
   - Configure notification settings
   - Test connections and permissions

Detailed technical specifications and configurations are available in sticky notes within the workflow.
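As an illustration of the filtering step, here is a minimal sketch of an n8n Code node that drops video posts and de-duplicates by post ID. The `is_video` and `id` field names follow common Instagram scraper outputs and are assumptions, not this workflow's exact schema:

```javascript
// Hypothetical Code node: keep only image posts and skip duplicates.
const seen = new Set();
return $input.all().filter(item => {
  const post = item.json;
  if (post.is_video) return false;     // images only (assumed field)
  if (seen.has(post.id)) return false; // drop duplicate posts (assumed field)
  seen.add(post.id);
  return true;
});
```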
by Cyril Nicko Gaspar
📌 HubSpot Lead Enrichment with Bright Data MCP

This template enables natural-language-driven automation using Bright Data's MCP tools, triggered directly by new leads in HubSpot. It dynamically extracts and executes the right tool based on lead context, powered by AI and configurable in N8N.

❓ What Problem Does This Solve?

Manual lead enrichment is slow, inconsistent, and drains valuable time. This solution automates the process using a no-code workflow that connects HubSpot, Bright Data MCP, and an AI agent, without requiring scripts or technical skills. Perfect for marketing, sales, and RevOps teams.

🧰 Prerequisites

To use this template, you'll need:

- A self-hosted or cloud instance of N8N
- A Bright Data MCP API token
- A valid OpenAI API key (or compatible AI model)
- A HubSpot account
- Either a Private App token or OAuth credentials for HubSpot
- Basic familiarity with N8N workflows

⚙️ Setup Instructions

1. Set Up Authentication in HubSpot

🔐 Option 1: Use a Private App Token (Simple Setup)

- Log in to your HubSpot account.
- Navigate to Settings → Integrations → Private Apps.
- Create a new Private App with the following scopes:
  - crm.objects.contacts.read
  - crm.objects.contacts.write
  - crm.schemas.contacts.read
  - crm.objects.companies.read (optional)
- Copy the Access Token.
- In N8N, create a credential for HubSpot App Token and paste the app token in the field.
- Go back to the HubSpot Private App settings to set up a webhook: copy the URL from your workflow's Webhook node and paste it there.

🔁 Option 2: Use OAuth (Advanced + Secure)

- In HubSpot, go to Settings → Integrations → Apps → Create App.
- Set your Redirect URL to match your N8N OAuth2 redirect path.
- Choose scopes like:
  - crm.objects.companies.read
  - crm.objects.contacts.read
  - crm.objects.deals.read
  - crm.schemas.companies.read
  - crm.schemas.contacts.read
  - crm.schemas.deals.read
  - crm.objects.contacts.write (conditionally required)
- Note the Client ID and Client Secret.
- Copy the App ID and the developer API key.
- In N8N, create a credential for HubSpot Developer API and paste the information from the previous steps.
- Attach these credentials to the HubSpot node in N8N.

2. Obtain the API Token and Other Necessary Information from Bright Data

In your Bright Data account, obtain the following information:

- API token
- Web Unlocker zone name (optional)
- Browser API username and password string separated by a colon (optional)

3. Host the SSE Server from the STDIO Command

The methods below will allow you to receive SSE (Server-Sent Events) from Bright Data MCP via a local Supergateway or Smithery.

**Method 1: Run Supergateway in a separate web service (Recommended)**

This method works for both the cloud version and self-hosted N8N. Sign up with any cloud service of your choice (DigitalOcean, Heroku, Hetzner, Render, etc.).

For NPM-based installation:

- Create a new web service.
- Choose Node.js as the runtime environment and set up a custom server without a repository.
- In your server's settings, define environment variables or a .env file:

```
API_TOKEN=your_brightdata_api_token
WEB_UNLOCKER_ZONE=optional_zone_name
BROWSER_AUTH=optional_browser_auth
```

- Paste the following text as the start command:

```
npx -y supergateway --stdio "npx -y @brightdata/mcp" --port 8000 --baseUrl http://localhost:8000 --ssePath /sse --messagePath /message
```

- Deploy it and copy the web server URL, then append /sse to it. Your SSE server should now be accessible at: https://your_server_url/sse

For Docker-based installation:

- Create a new web service.
- Choose Docker as the runtime environment.
- Set up your Docker environment by pulling the necessary images or creating a custom Dockerfile.
- In your server's settings, define environment variables or a .env file:

```
API_TOKEN=your_brightdata_api_token
WEB_UNLOCKER_ZONE=optional_zone_name
BROWSER_ZONE=optional_browser_zone_name
```

- Use the following Docker command to run Supergateway:

```
docker run -it --rm -p 8000:8000 supercorp/supergateway \
  --stdio "npx -y @brightdata/mcp" \
  --port 8000
```

- Deploy it and copy the web server URL, then append /sse to it. Your SSE server should now be accessible at: https://your_server_url/sse

For more installation guides, please refer to https://github.com/supercorp-ai/supergateway.git.

**Method 2: Run Supergateway in the same web service as the N8N instance**

This method will only work for self-hosted N8N.

a. Set the required environment variables. In your server's settings, define environment variables or a .env file:

```
API_TOKEN=your_brightdata_api_token
WEB_UNLOCKER_ZONE=optional_zone_name
BROWSER_ZONE=optional_browser_zone_name
```

b. Run Supergateway in the background:

```
npx -y supergateway --stdio "npx -y @brightdata/mcp" --port 8000 --baseUrl http://localhost:8000 --ssePath /sse --messagePath /message
```

Use the command above through the cloud shell or set it as a pre-deploy command. Your SSE server should now be accessible at: http://localhost:8000/sse

For more installation guides, please refer to https://github.com/supercorp-ai/supergateway.git.

**Method 3: Configure via Smithery.ai (Easiest)**

If you don't want additional setup and want to test it right away, visit https://smithery.ai/server/@luminati-io/brightdata-mcp/tools to:

- Sign up (if you are new to Smithery)
- Create an API key
- Define environment variables via a profile
- Retrieve your SSE server HTTP URL

4. Connect Google Sheets to N8N

Ensure your Google Sheet:

- Contains columns like row_id, first_name, last_name, email, and status.
- Is shared with your N8N service account (or connected via OAuth).

In N8N:

- Add a Google Sheets Trigger node
- Set it to watch for new rows in your lead sheet

5. Import and Configure the N8N Workflow

- Import the provided JSON workflow into N8N.
- Update nodes with your credentials:
  - HubSpot: add your API key or connect it via OAuth
  - Google Sheets Trigger: link to your actual sheet
  - OpenAI node: add your API key
  - Bright Data tool execution: add your Bright Data token and SSE URL

🔄 How It Works

1. A new contact appears in HubSpot, or a new row is added to the Google Sheet.
2. N8N triggers the workflow.
3. The AI agent classifies the task (e.g., "Find LinkedIn", "Get company info").
4. The relevant MCP tool is called.
5. Results are appended back to the sheet or routed to another destination.
6. Rerun a specific record by setting its status to "needs more enrichment", or by leaving it blank.

🧩 Use Cases

- **B2B Lead Enrichment** – Add missing fields (title, domain, social profiles)
- **Email Intelligence** – Validate and enrich based on email
- **Market Research** – Pull company or contact data on demand
- **CRM Auto-fill** – Push enriched leads to tools like HubSpot or Salesforce

🛠️ Customization

- **Prompt Tuning** – Adjust how the AI interprets input data
- **Column Mapping** – Customize which fields to pull from the sheet
- **Tool Logic** – Add retries, fallback tools, or confidence-based routing
- **Destination Output** – Integrate with CRMs, Slack, or webhook endpoints

✅ Summary

This template turns a Google Sheet into an AI-powered lead enrichment engine. By combining Bright Data's tools with a natural language AI agent, your team can automate repetitive tasks and scale lead ops, without writing code. Just add a row, and let the workflow do the rest.
by Polina Medvedieva
This workflow automates the process of discovering and extracting APIs from various services, followed by generating custom schemas. It works in three distinct stages: research, extraction, and schema generation, with each stage tracking progress in a Google Sheet.

🙏 Jim Le deserves major kudos for helping to build this sophisticated three-stage workflow that cleverly automates API documentation processing using a smart combination of web scraping, vector search, and LLM technologies.

How it works

Stage 1 - Research:
- Fetches pending services from a Google Sheet
- Uses Google search to find API documentation
- Employs Apify for web scraping to filter relevant pages
- Stores webpage contents and metadata in Qdrant (vector database)
- Updates progress status in the Google Sheet (pending, ok, or error)

Stage 2 - Extraction:
- Processes services that completed research successfully
- Queries the vector store to identify products and offerings (see the sketch below)
- Runs further queries for relevant API documentation
- Uses Gemini (LLM) to extract API operations
- Records extracted operations in the Google Sheet
- Updates progress status (pending, ok, or error)

Stage 3 - Generation:
- Takes services with successful extraction
- Retrieves all API operations from the database
- Combines and groups operations into a custom schema
- Uploads the final schema to Google Drive
- Updates the final status in the sheet with the file location

Ideal for:
- Development teams needing to catalog multiple APIs
- API documentation initiatives
- Creating standardized API schema collections
- Automating API discovery and documentation

Accounts required:
- Google account (for Sheets and Drive access)
- Apify account (for web scraping)
- Qdrant database
- Gemini API access

Set up instructions:
1. Prepare your Google Sheets document with the services information. Here's an example of a Google Sheet – you can copy it and change or remove the values under the columns. Also, make sure to update the Google Sheets nodes with the correct Google Sheet ID.
2. Configure Google Sheets OAuth2 credentials, the required third-party services (Apify, Qdrant), and Gemini.
3. Ensure proper permissions for Google Drive access.
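The Stage 2 queries are, at their core, similarity searches against Qdrant. Here is a minimal sketch of that call from an n8n Code node; the host, collection name, and `embedding` input field are assumptions, while the /points/search endpoint and request shape follow Qdrant's standard REST API:

```javascript
// Hypothetical Code node: similarity search against a Qdrant collection.
const embedding = $input.first().json.embedding; // query vector computed upstream (assumed field)

const res = await this.helpers.httpRequest({
  method: 'POST',
  url: 'http://localhost:6333/collections/api_docs/points/search', // assumed host and collection
  json: true,
  body: {
    vector: embedding,
    limit: 5,           // top matching documentation chunks
    with_payload: true, // include stored page text and metadata
  },
});

return res.result.map(hit => ({ json: { score: hit.score, ...hit.payload } }));
```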
by Ayham Joumran
How It Works

This template is a complete, hands-on tutorial for building a RAG (Retrieval-Augmented Generation) pipeline. In simple terms, you'll teach an AI to become an expert on a specific topic—in this case, the official n8n documentation—and then build a chatbot to ask it questions.

Think of it like this: instead of a general-knowledge AI, you're building an expert librarian.

🔧 Workflow Overview

The workflow is split into two main parts:

Part 1: Indexing the Knowledge (📚 Building the Library)

This is a one-time process you run manually. The workflow will:

1. Automatically scrape all pages of the n8n documentation.
2. Break them down into small, digestible chunks.
3. Use an AI model to create a numerical representation (an embedding) for each chunk.
4. Store these embeddings in n8n's built-in Simple Vector Store.

> This is like a librarian reading every book and creating a hyper-detailed index card for every paragraph.

> ⚠️ Important: This in-memory knowledge base is temporary. It will be erased if you restart your n8n instance. You'll need to run the indexing process again in that case.

Part 2: The AI Agent (🧠 The Expert Librarian)

This is the chat interface. When you ask a question:

1. The AI agent doesn't guess the answer. It searches the knowledge base to find the most relevant "index cards" (chunks).
2. It feeds those chunks to a language model (Gemini) with strict instructions:

> "Answer the user's question using ONLY this information."

This ensures answers are accurate, factual, and grounded in your documents. (A minimal sketch of this grounding step appears at the end of this description.)

🚀 Setup Steps

> Total setup time: ~2 minutes
> Indexing time: ~15–20 minutes

This template uses n8n's built-in tools, so no external database is needed.

1. Configure OpenAI Credentials

You'll need an OpenAI API key (for GPT models). In your n8n workflow:

- Go to any of the three OpenAI nodes (e.g., OpenAI Chat Model).
- Click the Credential dropdown → + Create New Credential.
- Enter your OpenAI API key and save.

2. Apply Credentials to All Nodes

Your new credential is now saved. Go to the other two OpenAI nodes (e.g., OpenAI Embeddings) and select the newly created credential from the dropdown.

3. Build the Knowledge Base

- Find the Start Indexing manual trigger node (top-left of the workflow).
- Click the Execute Workflow button to start indexing.

> ⚠️ Be patient: this takes 15–20 minutes to scrape and process the full documentation. You only need to do this once per n8n session.

4. Chat With Your Expert Agent

- After indexing completes, activate the entire workflow (toggle at the top).
- Open the RAG Chatbot chat trigger node (bottom-left).
- Copy its Public URL.
- Open it in a new tab and ask questions about n8n!

Example questions:
- "How does the IF node work?"
- "What is a sub-workflow?"

👤 Credits

All credits go to Lucas Peyrin
🔗 lucaspeyrin on n8n.io
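The grounding step described above boils down to assembling a context-restricted prompt from the retrieved chunks. Here is a minimal sketch as an n8n Code node; the `chatInput` and `documents` field names and the instruction wording are illustrative assumptions, not the template's exact prompt:

```javascript
// Hypothetical Code node: build a grounded prompt from retrieved chunks.
const question = $input.first().json.chatInput;     // user's question (assumed field)
const chunks = $input.first().json.documents ?? []; // retrieved chunks (assumed field)

const context = chunks.map((c, i) => `[${i + 1}] ${c.pageContent}`).join('\n\n');

const prompt = [
  "Answer the user's question using ONLY the information below.",
  'If the answer is not in the context, say you do not know.',
  '',
  `Context:\n${context}`,
  '',
  `Question: ${question}`,
].join('\n');

return [{ json: { prompt } }];
```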
by Luciano Gutierrez
Healthcare Clinic Assistant with WhatsApp and Telegram Integration

Version: 1.1.0
n8n Version: 1.88.0+
License: MIT

📋 Description

A comprehensive and modular automation workflow designed for healthcare clinics. It manages patient communication, appointment scheduling, confirmations, rescheduling, internal tasks, and media processing by integrating WhatsApp, Telegram, Google Calendar, and Google Tasks, combined with AI-powered agents for maximum efficiency. This system guarantees proactive communication with patients, streamlined internal clinic management, and consistent data synchronization across platforms.

🌟 Key Features

- 🤖 **AI-Powered Specialized Agents**: Distinct agents handle WhatsApp patient support, appointment confirmations, and internal rescheduling tasks.
- 📱 **Omnichannel Communication**: Handles patient interactions via WhatsApp and staff commands via Telegram.
- 📅 **Google Calendar Appointment Management**: Full synchronization for creating, updating, canceling, and confirming appointments.
- 📋 **Task Management with Google Tasks**: Manages shopping lists and administrative tasks efficiently through staff Telegram requests.
- 🔔 **Automated Appointment Reminders**: A daily-triggered system proactively sends WhatsApp confirmations to patients for next-day appointments.
- 🖼️ **Intelligent Media Processing**: Transcribes audio, extracts text from images, and processes documents using OpenAI and OpenRouter AI models.
- 🛡️ **Escalation to Human Support**: Automatically detects sensitive or urgent cases and escalates them to a human agent when needed.

🏥 Use Cases

- **Patient Communication**: Respond to inquiries, schedule, reschedule, and confirm appointments seamlessly via WhatsApp.
- **Internal Clinic Operations**: Allow staff to modify appointments or add shopping-list reminders directly from Telegram.
- **Appointment Confirmation System**: Automatically contacts patients one day prior to appointments for confirmation or rescheduling.
- **Task and Reminder Management**: Keeps clinic operations organized through automatic task management with Google Tasks.

🛠️ Technical Implementation

WhatsApp Patient Interaction Flow

1. **Webhook Reception**: Incoming WhatsApp messages are captured via an Evolution API webhook.
2. **Message Classification**: Intelligent routing of messages based on content type (text, image, audio, document).
3. **Media Content Processing**:
   - Audio: download, convert, and transcribe via OpenAI Whisper.
   - Images: analyze and extract text/descriptions with the OpenAI Vision model.
4. **Patient Request Handling**: A specialized WhatsApp assistant responds appropriately using AI prompts.
5. **Outbound Message Formatting**: Ensures messages comply with WhatsApp format standards.
6. **Message Delivery**: Sends responses back via the Evolution API.

Telegram Staff Management Flow

1. **Telegram Webhook Reception**: Captures messages from authorized staff accounts.
2. **Internal Assistant Processing**:
   - Appointment rescheduling: identifies and updates appointments through MCP Google Calendar.
   - Task creation: adds new entries to the clinic's shopping list using Google Tasks.
3. **Notifications and Confirmations**: Sends confirmations back to staff through Telegram.

Appointment Reminder System

1. **Daily Trigger Activation**: Fires every weekday at 08:00 AM (see the time-window sketch at the end of this description).
2. **Calendar Scraping**: Lists the next day's appointments from Google Calendar.
3. **Patient Contact**: Sends WhatsApp confirmation messages for each appointment.
4. **Response Management**: Redirects confirmation or rescheduling replies to the appropriate agents.

⚙️ Setup Instructions

1. Import the Workflow
   - n8n → Workflows → Import from File → Upload this JSON file.
2. Configure Credentials
   - Evolution API (WhatsApp communication)
   - Telegram Bot API (staff communication)
   - Google Calendar OAuth2 (appointment management)
   - Google Tasks OAuth2 (task management)
   - OpenAI and OpenRouter APIs (AI agents)
   - PostgreSQL database (chat memory)
3. Set Sensitive Variables. Replace the placeholder values:
   - {sua instância aqui} → Evolution API instance name
   - {número_whatsapp} → WhatsApp numbers
   - {url_do_servidor} → server URLs
   - {a sua apikey aqui} → API keys
   - {seu_calendario} → Google Calendar ID
4. Customize AI Prompts
   - Adjust system prompts to fit your clinic's tone, service style, and patient communication guidelines.
   - Set clinic operating hours, escalation rules, and cancellation procedures in the AI prompts.
5. Activate and Test
   - Simulate patient messages via WhatsApp.
   - Test Telegram commands from staff members.
   - Validate daily appointment reminders using the scheduled trigger.

🏷️ Tags

Healthcare, Clinic Management, WhatsApp Integration, Telegram Bot, Appointment Scheduling, Google Calendar, Google Tasks, AI Agents, n8n Automation

📚 Technical Notes

- PostgreSQL is used for persistent chat memory across sessions.
- Multiple AI models are used: OpenAI GPT-4.1-nano, OpenAI GPT-4.1-mini, and Google Gemini 2.0 and 2.5.
- Full media content processing is supported (audio, image, text).
- Compliant escalation workflows ensure patient safety and proper handoff to human staff when necessary.
- All sensitive patient data are stored securely inside calendar event descriptions for easy retrieval by agents.

📜 License

This workflow is provided under the MIT License. Feel free to adapt and customize it for your clinic's specific needs.
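For the reminder system, the key detail is the "next day" window passed to the Google Calendar query. Here is a minimal sketch of an n8n Code node that computes it; the output field names are assumptions chosen to match the Google Calendar node's timeMin/timeMax parameters:

```javascript
// Hypothetical Code node: compute tomorrow's time window for the
// Google Calendar "get many events" query used by the reminder flow.
const start = new Date();
start.setDate(start.getDate() + 1);
start.setHours(0, 0, 0, 0); // tomorrow, 00:00 local time

const end = new Date(start);
end.setHours(23, 59, 59, 999); // tomorrow, end of day

return [{ json: { timeMin: start.toISOString(), timeMax: end.toISOString() } }];
```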
by Jonas
🎧 Daily RSS Digest & Podcast Generation

This workflow automates the creation of a daily sports podcast from your favorite news sources. It fetches articles, uses AI to write a digest and a two-person dialogue, and produces a single, merged audio file with KOKORO TTS, ready for listening.

✨ How it works:

- 📰 Fetch & Filter Daily News: The workflow triggers daily, fetches articles from your chosen RSS feeds, and filters them to keep only the most recent content.
- ✍️ Generate AI Digest & Script: Using Google Gemini, it first creates a written summary of the day's news. A second AI agent then transforms this news into an engaging, conversational podcast script between two distinct AI speakers.
- 🗣️ Generate Voices in Chunks: The script is split into individual lines of dialogue. The workflow then loops through each line, calling a Text-to-Speech (TTS) API to generate a separate audio file (an MP3 chunk) for each part of the conversation.
- 🎛️ Merge Audio with FFmpeg: After all the audio chunks are created and saved locally, a command-line script generates a list of all the files and uses FFmpeg to losslessly merge them into a single, seamless MP3 file (see the sketch below). All temporary files are then deleted.
- 📤 Send the Final Podcast: The final, merged MP3 is read from the server and delivered directly to your Telegram chat with a dynamic, dated filename.

You can modify:

- 📰 The RSS feeds to any news source you want.
- 🤖 The AI prompts to change the tone, language, or style of the digest and podcast.
- 🎙️ The TTS voices used for the two speakers.
- 📫 The final delivery method (e.g., send to Discord, save to Google Drive, etc.).

Perfect for creating a personalized, hands-free news briefing to listen to on your commute.

Inspired by: https://n8n.io/workflows/6523-convert-newsletters-into-ai-podcasts-with-gpt-4o-mini-and-elevenlabs/
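The merge step uses FFmpeg's concat demuxer, which reads a text file listing the chunks and re-muxes them without re-encoding. Here is a minimal sketch as an n8n Code node that writes that list and builds the command for an Execute Command node; it assumes a self-hosted n8n with the fs module allowed (NODE_FUNCTION_ALLOW_BUILTIN=fs), and the /tmp paths and `path` field are placeholders:

```javascript
// Hypothetical Code node: build the FFmpeg concat list and merge command.
const fs = require('fs');

const chunkPaths = $input.all().map(item => item.json.path); // e.g. '/tmp/chunk_000.mp3' (assumed field)
const listFile = '/tmp/podcast_list.txt';

// The concat demuxer expects one "file '<path>'" line per input.
fs.writeFileSync(listFile, chunkPaths.map(p => `file '${p}'`).join('\n'));

// -c copy re-muxes without re-encoding, so the merge is lossless.
const outFile = `/tmp/digest_${new Date().toISOString().slice(0, 10)}.mp3`;
const command = `ffmpeg -y -f concat -safe 0 -i ${listFile} -c copy ${outFile}`;

return [{ json: { command, outFile } }];
```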
by Rostislav
This n8n template provides a complete solution for Optical Character Recognition (OCR) of image and PDF files directly within Telegram. Users can simply send PNG, JPEG, or PDF documents to your Telegram bot, and the workflow will process them, extract text using Mistral OCR, and return the content as a downloadable Markdown (.md) text file.

Key Features & How it Works:

- **Effortless OCR via Telegram**: Users send a file to the bot, and the system automatically detects the file type (PNG, JPEG, or PDF).
- **File Size Validation**: The workflow enforces a **25 MB** file size limit, in line with Telegram Bot API restrictions, ensuring smooth operation (see the validation sketch below).
- **Mistral-Powered Recognition**: Leveraging **Mistral OCR**, the template accurately extracts text from various document types.
- **Markdown Output**: Recognized text is automatically converted into a clean Markdown (.md) text file, ready for easy editing, storage, or further processing.
- **Secure File Delivery**: The processed Markdown file is delivered back to the user via Telegram. For this, the workflow ingeniously uses a **GET request to itself** (acting as a file downloader proxy). This generated link allows Telegram to fetch the .md file directly. Please note: this download functionality requires the workflow to be in an Active status.
- **Optional Whitelist Security**: Enhance your bot's security with an **optional whitelist feature**. You can configure specific Telegram User IDs to restrict access, ensuring only authorized users can interact with your bot.
- **Simplified Webhook Management**: The template includes dedicated utility flows for convenient management of your Telegram bot's webhooks (for both development and production environments).

This template is ideal for digitizing documents on the go, extracting text from scanned files, or converting image-based content into versatile, searchable text.

Getting Started

To get this powerful OCR bot up and running, follow these two main steps:

1. Set Up Your Telegram Bot: First, configure your Telegram bot and its webhooks. Follow the instructions detailed in the Telegram Bot Webhook Setup section to create your bot, obtain its API token, and set up the necessary webhook URLs.
2. Configure Bot Settings: Next, define the key operational parameters for your bot. Proceed to the Settings Configuration section and populate the variables according to your preferences, including options for whitelist access.
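To illustrate the validation step, here is a minimal sketch of an n8n Code node that gates incoming Telegram files on size and type. The 25 MB limit and allowed types come from the description above; the update shape follows the Telegram Bot API, where photos arrive as a PhotoSize array without a MIME type:

```javascript
// Hypothetical Code node: validate an incoming Telegram file before OCR.
const MAX_BYTES = 25 * 1024 * 1024; // 25 MB limit from the template description
const ALLOWED = ['image/png', 'image/jpeg', 'application/pdf'];

const msg = $input.first().json.message ?? {};
// Documents carry mime_type; photos are a size array (largest last).
const file = msg.document ?? (msg.photo ? msg.photo[msg.photo.length - 1] : null);

if (!file) throw new Error('No file attached to the message');
if ((file.file_size ?? 0) > MAX_BYTES) throw new Error('File exceeds the 25 MB limit');
if (file.mime_type && !ALLOWED.includes(file.mime_type)) {
  throw new Error(`Unsupported file type: ${file.mime_type}`);
}

return [{ json: { file_id: file.file_id, mime_type: file.mime_type ?? 'image/jpeg' } }];
```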
by ömerDrn
Automated Cryptocurrency Analysis & Reporting with Google Gemini and CoinGecko

This powerful template is an n8n workflow that automates cryptocurrency market data analysis and delivers reports directly to your inbox. It fetches real-time data from the CoinGecko API, generates AI-powered analysis, and sends the report via email.

Features

- **Scheduled Execution**: Runs automatically at a set time daily (default: 10:00 AM).
- **Customizable Analysis**: Personalize the analysis content and language via the "AI Prompt" nodes.
- **Easy Scalability**: Duplicate node groups to add more cryptocurrencies.
- **Flexible AI Integration**: Defaults to Google Gemini, but supports ChatGPT/Ollama.

Setup Instructions

1. n8n Installation: Install n8n (self-hosted or Cloud version).
2. Email Account Setup: Add email service credentials in n8n (e.g., Microsoft Outlook OAuth2).
3. AI Model Credentials (Google Gemini): Obtain an API key from Google AI Studio and add it to n8n "Credentials".
4. Import Template: Copy the JSON code into n8n as a new workflow.

Customization

- **Change Cryptocurrencies**: Update the ids= parameter in the HTTP Request nodes (e.g., ids=bitcoin); see the request sketch below.
- **Edit AI Prompt**: Modify the text in the "AI Prompt" nodes.
- **Use a Different AI Model**: Replace Google Gemini with supported alternatives.
- **Update Email Address**: Change the recipient in the "Send Mail" nodes.
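For reference, the data fetch is a single GET against CoinGecko's documented /coins/markets endpoint. Here is a minimal sketch of the same call as an n8n Code node (the vs_currency choice and the one-item-per-coin output are assumptions):

```javascript
// Hypothetical Code node: fetch market data for selected coins from CoinGecko.
const coins = await this.helpers.httpRequest({
  method: 'GET',
  url: 'https://api.coingecko.com/api/v3/coins/markets',
  qs: { vs_currency: 'usd', ids: 'bitcoin' }, // change ids= to track other coins
  json: true,
});
return coins.map(coin => ({ json: coin }));
```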