by InfyOm Technologies
**What problem does this workflow solve?**

If you're using a self-hosted n8n instance, there's no built-in version history or undo for your workflows. If a workflow is accidentally modified or deleted, there's no way to roll back. This backup workflow solves that problem by automatically syncing your workflows to Google Drive, giving you version control and peace of mind.

**What does this workflow do?**

- Runs on a set schedule (e.g., daily or every 12 hours).
- Fetches all workflows from your self-hosted n8n instance.
- Detects changes to avoid duplicate backups.
- Creates a dedicated folder for each workflow in Google Drive.
- Uploads new or updated workflow files in JSON format.
- Keeps backup history organized by date.
- Allows for easy restore by importing backed-up JSON into n8n.

**Setup Instructions**

1. Google Drive setup: connect your Google Drive account using the Google Drive node in n8n, then choose or create a root folder (e.g., n8n-workflow-backups) where backups will be stored.
2. n8n API credentials: generate a Personal Access Token from your self-hosted n8n instance (Settings → API in your n8n dashboard), copy the token, and use it in the HTTP Request node headers as: Authorization: Bearer <your_token>
3. Schedule the workflow: use the Cron node to run this workflow at your desired frequency (e.g., once a day or every 12 hours).

**How it Works (Step-by-Step Flow)**

1. Scheduled Trigger: the workflow begins on a timed schedule using the Cron node.
2. Fetch All Workflows: uses the n8n API (/workflows) to retrieve a list of all existing workflows.
3. Loop Through Workflows: for each workflow, a folder is created in Google Drive using the workflow name, and the workflow's last-updated timestamp is checked against the Google Drive backups.
4. Smart Change Detection: if the workflow has changed since the last backup, a new .json file is uploaded to the corresponding folder, named with the workflow's last-updated date (YYYY-MM-DD-HH-mm-ss.json) to maintain a versioned history. If no change is detected, the workflow is skipped. A minimal code sketch of these steps appears after the folder layout below.

**Google Drive Folder Organization**

Backups are neatly organized by workflow and version:

```
/n8n-workflow-backups/
├── google-drive-backup-KqhdMBHIyAaE7p7v/
│   ├── 2025-07-15-13-03-32.json
│   └── 2025-07-14-03-08-12.json
└── resume-video-avatar-KqhdMBHIyAaE8p8vr/
    └── 2025-07-15-23-05-52.json
```

Each folder is named after the workflow's name + id and contains timestamped versions.
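For orientation, here is a minimal sketch of what the fetch-and-name steps amount to outside of n8n. It assumes an instance URL, the /workflows path described in the setup, the Bearer-token header from Setup step 2, and a latestBackupDates map you would derive from the existing Drive folders; none of these identifiers come from the workflow's nodes themselves.

```typescript
// Minimal sketch: fetch workflows, skip unchanged ones, and derive the
// timestamped backup filename. The base URL, env var name, and the shape of
// the API response are assumptions for illustration only.
const N8N_BASE = "https://your-n8n-instance.example.com"; // assumed instance URL
const TOKEN = process.env.N8N_TOKEN ?? "";                // personal access token

interface WorkflowSummary {
  id: string;
  name: string;
  updatedAt: string; // ISO timestamp reported by the n8n API
}

async function fetchWorkflows(): Promise<WorkflowSummary[]> {
  const res = await fetch(`${N8N_BASE}/workflows`, {
    headers: { Authorization: `Bearer ${TOKEN}` }, // header as configured in Setup step 2
  });
  if (!res.ok) throw new Error(`n8n API returned ${res.status}`);
  const body = await res.json();
  return Array.isArray(body) ? body : body.data; // some versions nest the list under `data`
}

// YYYY-MM-DD-HH-mm-ss.json, matching the naming scheme described above.
function backupFilename(updatedAt: string): string {
  return new Date(updatedAt).toISOString().slice(0, 19).replace("T", "-").replace(/:/g, "-") + ".json";
}

async function planBackups(latestBackupDates: Map<string, string>) {
  for (const wf of await fetchWorkflows()) {
    const lastBackedUp = latestBackupDates.get(wf.id);
    if (lastBackedUp && lastBackedUp >= wf.updatedAt) continue; // unchanged: skip
    console.log(`Would upload ${wf.name}/${backupFilename(wf.updatedAt)}`);
  }
}

planBackups(new Map()).catch(console.error);
```

In the template itself, the HTTP Request and Google Drive nodes do this work, with the change check handled inside the workflow.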
**Customization Options**

- Change backup frequency: adjust the Cron node to run backups daily, weekly, or even hourly based on your needs.
- Use a different storage provider: you can swap out Google Drive for Dropbox, S3, or another cloud provider with minimal changes.
- Add workflow filtering: only back up workflows that are active or match specific tags by filtering results from the n8n API.

**How to Restore a Workflow from Backup**

1. Go to the Google Drive backup folder for the workflow you want to restore.
2. Download the desired .json file (based on the date).
3. Open your self-hosted n8n instance.
4. Click Import Workflow from the sidebar menu.
5. Upload the JSON file to restore the workflow.

> You can choose to overwrite an existing workflow or import it as a new one.

**Who can use this?**

This template is ideal for:

- Developers running self-hosted n8n
- Teams managing large workflow libraries
- Anyone needing workflow versioning, rollback, or disaster recovery
- Productivity enthusiasts looking for automated backups

**Tip**: Consider enabling version history in Google Drive so you get even more fine-grained backup recovery options on top of what this workflow provides!

**Ready to use?** Just plug in your n8n token, connect Google Drive, and schedule your backups. Your workflows are now protected!
by Javier Hita
Follow me on LinkedIn for more!

Category: Lead Generation, Data Collection, Business Intelligence
Tags: lead-generation, google-maps, rapidapi, business-data, contact-extraction, google-sheets, duplicate-prevention, automation
Difficulty Level: Intermediate
Estimated Setup Time: 15-20 minutes

**Template Description**

**Overview**

This powerful n8n workflow automates the extraction of comprehensive business information from Google Maps using keyword-based searches via RapidAPI's Local Business Data service. Perfect for lead generation, market research, and competitive analysis, this template intelligently gathers business data including contact details, social media profiles, and location information while preventing duplicates and optimizing API usage.

**Key Features**

- **Keyword-Based Google Maps Scraping**: Search for any business type in any location using natural language queries
- **Contact Information Extraction**: Automatically extracts emails, phone numbers, and social media profiles (LinkedIn, Instagram, Facebook, etc.)
- **Smart Duplicate Prevention**: Two-level duplicate detection saves 50-80% on API costs by skipping processed searches and preventing duplicate business entries
- **Google Sheets Integration**: Seamless data storage with automatic organization and structure
- **Multi-Location Support**: Process multiple cities, regions, or countries in a single workflow execution
- **Rate Limiting & Error Handling**: Built-in delays and error handling ensure reliable, uninterrupted execution
- **Cost Optimization**: Intelligent batching and duplicate prevention minimize API usage and costs
- **Comprehensive Data Collection**: Gather business names, addresses, ratings, reviews, websites, verification status, and more

**Prerequisites**

Required services & accounts:
- RapidAPI account with a subscription to the "Local Business Data" API
- Google account for Google Sheets integration
- n8n instance (cloud or self-hosted)

Required credentials:
- **RapidAPI HTTP Header Authentication** for the Local Business Data API
- **Google Sheets OAuth2** for data storage and retrieval

**Setup Instructions**

Step 1: RapidAPI Configuration

1. Create a RapidAPI account: sign up at RapidAPI.com, navigate to the "Local Business Data" API, and subscribe to a plan (the Basic plan supports 1000 requests/month).
2. Get API credentials: copy your X-RapidAPI-Key from the API dashboard and note the host: local-business-data.p.rapidapi.com
3. Configure the n8n credential: in n8n go to Settings → Credentials → Create New, choose type HTTP Header Auth, name it "RapidAPI Local Business Data", and add the headers:
   - X-RapidAPI-Key: YOUR_API_KEY
   - X-RapidAPI-Host: local-business-data.p.rapidapi.com

Step 2: Google Sheets Setup

1. Enable the Google Sheets API: go to the Google Cloud Console, enable the Google Sheets API for your project, and create OAuth2 credentials.
2. Configure the n8n credential: in n8n go to Settings → Credentials → Create New, choose type Google Sheets OAuth2 API, and follow the OAuth2 setup process.
3. Create the Google Sheet structure: create a new Google Sheet with these tabs.

keyword_searches sheet:

| select | query | lat | lon | country_iso_code |
|--------|-------|-----|-----|------------------|
| X | Restaurants Madrid | 40.4168 | -3.7038 | ES |
| X | Hair Salons Brooklyn | 40.6782 | -73.9442 | US |
| X | Coffee Shops Paris | 48.8566 | 2.3522 | FR |

stores_data sheet: the workflow will automatically create columns for business data, including business_id, name, phone_number, email, website, full_address, rating, review_count, linkedin, instagram, query, lat, lon, and 25+ more fields.
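Before importing, it can help to see the shape of the call the workflow's HTTP Request node makes for one row of the keyword_searches sheet. The sketch below is illustrative: the /search path and the exact parameter names (lat, lng, zoom, region, and so on) are assumptions to be confirmed against the Local Business Data documentation, while the two headers match the credential configured in Step 1.

```typescript
// Hedged sketch of the request sent for one row of the keyword_searches sheet.
// Endpoint path and parameter names are assumptions; check the RapidAPI docs.
const RAPIDAPI_KEY = process.env.RAPIDAPI_KEY ?? "";

interface SearchRow {
  query: string;            // e.g. "Restaurants Madrid"
  lat: number;              // e.g. 40.4168
  lon: number;              // e.g. -3.7038
  country_iso_code: string; // e.g. "ES"
}

async function searchBusinesses(row: SearchRow, limit = 100, zoom = 13, language = "en") {
  const params = new URLSearchParams({
    query: row.query,
    lat: String(row.lat),
    lng: String(row.lon),
    limit: String(limit),
    zoom: String(zoom),
    language,
    region: row.country_iso_code,
  });
  const res = await fetch(`https://local-business-data.p.rapidapi.com/search?${params}`, {
    headers: {
      "X-RapidAPI-Key": RAPIDAPI_KEY,
      "X-RapidAPI-Host": "local-business-data.p.rapidapi.com",
    },
  });
  if (!res.ok) throw new Error(`RapidAPI returned ${res.status}`);
  return res.json(); // business records: name, phone_number, website, rating, ...
}
```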
Step 3: Workflow Configuration

1. Import the workflow: copy the provided JSON and, in n8n, choose Import from JSON.
2. Update placeholder values: replace YOUR_GOOGLE_SHEET_ID with your actual Google Sheet ID and update the credential references to match your setup.
3. Configure search parameters (optional):
   - limit: 1-100 results per query (default: 100)
   - zoom: 10-18 search radius (default: 13)
   - language: EN, ES, FR, etc. (default: EN)

**How It Works**

Workflow process:

1. Load Search Criteria: reads queries marked with "X" from the keyword_searches sheet
2. Load Existing Data: retrieves previously processed data for duplicate detection
3. Filter New Searches: a smart merge identifies only new query+location combinations
4. Process Each Location: sequential processing prevents API overload
5. Configure Parameters: prepares search parameters from the sheet data
6. API Request: calls RapidAPI to extract business information
7. Parse Data: structures and cleans all business information
8. Save Results: stores new leads in the stores_data sheet
9. Rate Limiting: waits 10 seconds between requests
10. Loop: continues until all new searches are processed

Duplicate prevention logic:

- Search level: compares new queries against existing data using the query+latitude combination, skipping already processed searches.
- Business level: each business receives a unique business_id to prevent duplicate entries even across different searches.

**Data Extracted**

Business information:
- Business name, full address, phone number
- Website URL, Google My Business rating and review count
- Business type, price level, verification status
- Geographic coordinates (latitude/longitude)
- Detailed location breakdown (street, city, state, country, zip)

Contact details:
- Email addresses (when publicly available)
- Social media profiles: LinkedIn, Instagram, Facebook, Twitter, YouTube, TikTok, Pinterest
- Additional phone numbers
- Direct Google Maps and reviews links

Search metadata:
- Original search query and parameters
- Extraction timestamp and geographic data
- API response details for tracking

**Use Cases**

Lead generation:
- Generate targeted prospect lists for B2B sales
- Build location-specific customer databases
- Create industry-specific contact lists
- Develop territory-based sales strategies

Market research:
- Analyze competitor density in target markets
- Study business distribution
by Hybroht
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**JSON Architect - Dynamically Generate JSON Output Formats for Any AI Agent**

**Overview**

Version: 1.0

The JSON Architect workflow instructs AI agents on the JSON structure required for a given context and creates the appropriate JSON output format. The workflow ensures that the generated JSON is validated and tested, providing a reliable JSON output format for use in various applications.

**Features**

- **Dynamic JSON Generation**: Automatically generates the JSON format based on the input requirements.
- **Validation and Testing**: Validates the generated JSON format and tests its functionality, ensuring reliability before output.
- **Iterative Improvement**: If the generated JSON is invalid or fails testing, the workflow attempts to regenerate it until it succeeds or a defined maximum number of rounds is reached.
- **Structured Output**: The final output is the generated JSON output format, making it easy to integrate with other systems or workflows.

**Who is this for?**

This workflow is ideal for developers, data scientists, and businesses that require dynamic JSON structures for the responses of AI agents. It is particularly useful for those involved in procedural generation, data interchange formats, configuration management, and machine learning model input/output.

**What problem does this solve?**

The workflow addresses the challenge of generating optimal JSON structures by automating the process of creation, validation, and testing. This ensures that the JSON format is appropriate for its intended use, reducing errors and enhancing the overall quality of data interchange.

Use-case examples:
- Data Interchange Formats
- Procedural Generation
- Machine Learning Model Input/Output
- Configuration Management

**What this workflow does**

The workflow orchestrates a process where AI agents generate, validate, and test JSON output formats based on the provided input. This leads to a more refined and functional JSON output parser.

**Workflow Steps**

1. Input & Setup: the initial input is provided, and the workflow is configured with the necessary parameters.
2. Round Start: initiates a round of JSON construction, ensuring the input is as expected.
3. JSON Generation & Validation: generates and validates the JSON output format according to the input.
4. JSON Test: verifies whether the generated JSON output format works as intended.
5. Validation or Test Fails: if the JSON fails validation or testing, the process loops back to the Round Start for correction.
6. Final Output: the final output is generated based on successful JSON construction, providing a cohesive response.
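To make the round logic concrete, here is a hedged sketch of the generate-validate-test loop. The generate, validate, and test helpers stand in for the workflow's AI agent and validation nodes; they are placeholders, not the template's actual node names.

```typescript
// Hedged sketch of the round loop: generate a JSON format, validate it, test it,
// and retry until success or max_rounds is reached.
interface RoundState {
  input: string;
  max_rounds: number; // suggested: 10
  rounds: number;     // initial: 0
}

async function buildJsonFormat(state: RoundState, agents: {
  generate: (input: string) => Promise<string>;
  validate: (format: string) => Promise<boolean>;
  test: (format: string, input: string) => Promise<boolean>;
}) {
  while (state.rounds < state.max_rounds) {
    state.rounds += 1;
    const format = await agents.generate(state.input);       // JSON Generation
    if (!(await agents.validate(format))) continue;          // validation fails: next round
    if (!(await agents.test(format, state.input))) continue; // test fails: next round
    return { input: state.input, json_format_structure: format, rounds: state.rounds };
  }
  throw new Error(`No valid JSON format produced within ${state.max_rounds} rounds`);
}
```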
**Expected Input**

- **input**: The input that requires a proper JSON structure.
- **max_rounds**: The maximum number of rounds before stopping the loop if it fails to produce and test a valid JSON structure. Suggested: 10.
- **rounds**: The initial number of rounds. Default: 0.

**Expected Output**

- **input**: The original input used to create the JSON structure.
- **json_format_name**: A snake_case identifier for the generated JSON format. Useful if you plan to reuse it for multiple AI agents or workflows.
- **json_format_usage**: A description of how to use the JSON output format in an input. Meant to be used by AI agents receiving the JSON output format in their output parser.
- **json_format_valid_reason**: The reason provided by the AI agents explaining why this JSON format works for the input.
- **json_format_structure**: The JSON format itself, intended for application through the Advanced JSON Output Parser custom node.
- **json_format_input**: The input after the JSON output format (json_format_structure) has been applied in an AI agent's output parser.

**Example**

An example that includes both the input and the final output is provided in a note within the workflow.

**n8n Setup Used**

- n8n version: 1.100.1
- n8n-nodes-advanced-output-parser: 1.0.1
- Running n8n via: Podman 4.3.1
- Operating system: Linux

**Requirements to Use/Setup**

Credentials & configuration: obtain the necessary LLM API key and permissions to utilize the workflow effectively. This workflow depends on a custom node for dynamically inputting JSON output formats called n8n-nodes-advanced-output-parser. You can find the repository here.

Warning: as of 2025-07-09, the custom node's creator has warned that this node is not production-ready, so exercise caution before using it in production environments.

**Notes, Assumptions & Warnings**

- This workflow assumes that users have a basic understanding of n8n and JSON configuration.
- This workflow assumes that users have access to the necessary API keys and permissions to utilize the Mistral API or other LLM APIs.
- Ensure that the input provided to the AI agents is clear and concise to avoid confusion in the JSON generation process. Ambiguous inputs may lead to invalid or irrelevant JSON output formats.

**About Us**

This workflow was developed by the Hybroht team of AI enthusiasts and developers dedicated to enhancing the capabilities of AI through collaborative processes. Our goal is to create tools that harness the possibilities of AI technology and more.
by Jah coozi
**AI Medical Symptom Checker & Health Assistant**

A responsible, privacy-focused health information assistant that provides general health guidance while maintaining strict safety protocols and medical disclaimers.

**IMPORTANT DISCLAIMER**: This tool provides general health information only and is NOT a substitute for professional medical advice, diagnosis, or treatment. Always consult qualified healthcare providers for medical concerns.

**Key Features**

Safety first:
- **Emergency Detection**: Automatically identifies emergency situations
- **Immediate Escalation**: Provides emergency numbers for critical cases
- **Clear Disclaimers**: Every response includes medical disclaimers
- **No Diagnosis**: Never attempts to diagnose conditions
- **Professional Referral**: Always recommends consulting healthcare providers

Core functionality:
- **Symptom Information**: General information about common symptoms
- **Wellness Guidance**: Health tips and preventive care
- **Medication Reminders**: General medication information
- **Multi-Language Support**: Serve diverse communities
- **Privacy Protection**: No data storage, anonymous processing
- **Resource Links**: Connects to trusted health resources

**Use Cases**

- General Health Information: Learn about symptoms and conditions
- Pre-Appointment Preparation: Organize questions for doctors
- Wellness Education: General health and prevention tips
- Emergency Detection: Immediate guidance for critical situations
- Health Resource Navigation: Find appropriate care providers

**Safety Protocols**

Emergency keyword detection covers:
- Chest pain, heart attack, stroke
- Breathing difficulties
- Severe bleeding, unconsciousness
- Allergic reactions, poisoning
- Mental health crises

Response guidelines (one way the keyword check can work is sketched below):
- Never diagnoses conditions
- Never prescribes medications
- Always includes disclaimers
- Encourages professional consultation
- Provides emergency numbers when needed
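One simple way the emergency keyword check can be implemented is sketched below. The keyword list and messages are illustrative assumptions rather than the workflow's exact configuration; a real deployment should substitute its own keyword list and localized emergency numbers.

```typescript
// Hedged sketch of a keyword-based emergency check of the kind described above.
const EMERGENCY_KEYWORDS = [
  "chest pain", "heart attack", "stroke", "can't breathe", "cannot breathe",
  "severe bleeding", "unconscious", "allergic reaction", "poisoning", "suicide",
];

const DISCLAIMER =
  "This is general health information only and not a substitute for professional " +
  "medical advice, diagnosis, or treatment.";

function routeMessage(userMessage: string): { emergency: boolean; reply: string } {
  const text = userMessage.toLowerCase();
  if (EMERGENCY_KEYWORDS.some((kw) => text.includes(kw))) {
    return {
      emergency: true,
      reply: "This may be a medical emergency. Call your local emergency number " +
             "(e.g., 112 or 911) or go to the nearest emergency department now. " + DISCLAIMER,
    };
  }
  // Non-emergency messages would be passed to the language model with a low
  // temperature (e.g., 0.3) and the disclaimer appended to its answer.
  return { emergency: false, reply: DISCLAIMER };
}
```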
**Setup Instructions**

1. Configure the OpenAI API: add your API key and set the temperature to 0.3 for consistency.
2. Review legal requirements: check local health information regulations, customize disclaimers as needed, and implement the required data policies.
3. Emergency contacts: update emergency numbers for your region, add local health resources, and include mental health hotlines.
4. Test thoroughly: verify emergency detection, check disclaimer display, and test various symptom queries.

**Example Interactions**

- General symptom query. User: "I have a headache for 3 days" → Bot: provides general headache information, self-care tips, and when to see a doctor.
- Emergency detection. User: "Chest pain, can't breathe" → Bot: EMERGENCY response with immediate action steps and emergency numbers.
- Wellness query. User: "How can I improve my sleep?" → Bot: general sleep hygiene tips and healthy-habits information.

**Integration Options**

- **Healthcare Websites**: Embed as a support widget
- **Telemedicine Platforms**: Pre-consultation tool
- **Health Apps**: General information module
- **Insurance Portals**: Member resource
- **Pharmacy Systems**: General drug information

**Compliance & Privacy**

- **HIPAA Considerations**: No PHI storage
- **GDPR Compliant**: No personal data retention
- **Anonymous Processing**: Session-based only
- **Audit Trails**: Optional logging for compliance
- **Data Encryption**: Secure transmission

**Limitations**

- Cannot diagnose medical conditions
- Cannot prescribe treatments
- Cannot replace emergency services
- Cannot provide specific medical advice
- Should not delay seeking medical care

**Best Practices**

- Always maintain clear disclaimers
- Never minimize serious symptoms
- Encourage professional consultation
- Keep information general and educational
- Update emergency contacts regularly
- Review and update health information
- Monitor for misuse
- Maintain audit trails where required

**Customization Options**

- Add local emergency numbers
- Include regional health resources
- Translate to local languages
- Integrate with local health systems
- Add specific disclaimers
- Customize for specific populations

Start providing responsible health information today!
by Oneclick AI Squad
This guide walks you through setting up an automated workflow that compares live flight fares across multiple booking platforms (e.g., Skyscanner, Akasa Air, Air India, IndiGo) using API calls, sorts the results by price, and sends the best deals via email. Ready to automate your flight fare comparison process? Let's get started!

**What's the Goal?**

- Automatically fetch and compare live flight fares from multiple platforms using scheduled triggers.
- Aggregate and sort fare data to identify the best deals.
- Send the comparison results via email for review or action.
- Enable 24/7 fare monitoring with seamless integration.

By the end, you'll have a self-running system that delivers the cheapest flight options effortlessly.

**Why Does It Matter?**

Manual flight fare comparison is time-consuming and often misses the best deals. Here's why this workflow is a game-changer:

- **Zero Human Error**: Automated data fetching and sorting ensure accuracy.
- **Time-Saving Automation**: Instantly compare fares across platforms, boosting efficiency.
- **24/7 Availability**: Monitor fares anytime without manual effort.
- **Cost Optimization**: Focus on securing the best deals rather than searching manually.

Think of it as your tireless flight fare assistant that always finds the best prices.

**How It Works**

Here's the step-by-step magic behind the automation:

Step 1: Trigger the Workflow
- **Set Schedule Node**: Triggers the workflow at a predefined schedule to check flight fares automatically and captures the timing for regular fare updates.

Step 2: Process Input Data
- **Set Input Data Node**: Sets the input parameters (e.g., origin, destination, departure date, return date) for flight searches and prepares the data to be sent to the various APIs.

Step 3: Fetch Flight Data
- **Skyscanner API Node**: Retrieves live flight fare data from Skyscanner using its API endpoint.
- **Akasa Air API Node**: Fetches live flight fare data from Akasa Air using its API endpoint.
- **Air India API Node**: Collects flight fare data directly from Air India's API.
- **IndiGo API Node**: Gathers flight fare data from IndiGo's API.

Step 4: Merge API Results
- **Merge API Data Node**: Combines the flight data from Skyscanner and Akasa Air into a single dataset.
- **Merge Both API Data Node**: Merges the data from Air India and IndiGo with the previous dataset.
- **Merge All API Results Node**: Consolidates all API data into one unified result for further processing.

Step 5: Analyze and Sort
- **Compare Data and Sorting Price Node**: Compares all flight fares and sorts them by price to highlight the best deals (see the sketch below).

Step 6: Send Results
- **Send Response via Email Node**: Sends the sorted flight fare comparison results to the user via email for review or action.
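To illustrate what the merge and sort steps boil down to, here is a small sketch. The FareRecord shape is an assumed normalization of the four providers' responses; in the workflow itself the Merge nodes and the Compare Data and Sorting Price node perform this work on whatever structure each API returns.

```typescript
// Hedged sketch of the merge-and-sort step: normalized fare records from the
// four API nodes are combined and ordered by price, cheapest first.
interface FareRecord {
  provider: string;   // "Skyscanner", "Akasa Air", "Air India", "IndiGo"
  flightNumber: string;
  price: number;      // fare in a single agreed currency, e.g. INR
  departure: string;  // ISO timestamp
}

function mergeAndSortFares(...sources: FareRecord[][]): FareRecord[] {
  return sources.flat().sort((a, b) => a.price - b.price);
}

// Example: the three Merge nodes followed by the sort collapse to one call here.
const bestDeals = mergeAndSortFares(
  [{ provider: "Skyscanner", flightNumber: "AI-101", price: 5400, departure: "2025-08-01T06:00:00+05:30" }],
  [{ provider: "IndiGo", flightNumber: "6E-204", price: 4980, departure: "2025-08-01T07:15:00+05:30" }],
);
console.log(bestDeals[0]); // cheapest fare, listed first in the email summary
```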
**How to Use the Workflow?**

Importing this workflow into n8n is a straightforward way to reuse this pre-built solution and save time. Below is a step-by-step guide to importing the Flight Fare Comparison workflow.

Steps to import a workflow in n8n:

1. Obtain the workflow JSON: the workflow is shared as a JSON file or code snippet (provided earlier or exported from another n8n instance). Ensure you have it in JSON format, either as a file (e.g., workflow.json) or as copied text.
2. Access the n8n workflow editor: log in to your n8n instance (n8n Cloud or self-hosted), go to the Workflows tab in the dashboard, and click Add Workflow to create a blank workflow.
3. Import the workflow:
   - Option 1: Import via JSON code (clipboard). In the n8n editor, click the three dots (⋯) in the top-right corner to open the menu, select Import from Clipboard, paste the JSON code, and click Import.
   - Option 2: Import via JSON file. In the n8n editor, click the three dots (⋯) in the top-right corner, select Import from File, choose the .json file from your computer, and click Open.

**Setup Notes**

- **API Credentials**: Configure each API node (Skyscanner, Akasa Air, Air India, IndiGo) with the respective API keys and endpoints. Check each API provider's documentation for details.
- **Email Integration**: Authorize the Send Response via Email node with your email service (e.g., Gmail SMTP settings or an email API like SendGrid).
- **Input Customization**: Adjust the Set Input Data node to include specific origin/destination pairs and date ranges as needed.
- **Schedule Configuration**: Set the desired frequency in the Set Schedule node (e.g., daily at 9 AM IST).

**Example Input**

Send a POST request to the workflow (if integrated with a webhook) with:

```json
{
  "origin": "DEL",
  "destination": "BOM",
  "departureDate": "2025-08-01",
  "returnDate": "2025-08-07"
}
```

**Optimization Tips**

- **Error Handling**: Add IF nodes to manage API failures or rate limits.
- **Rate Limits**: Include a Wait node if APIs have strict limits.
- **Data Logging**: Add a node (e.g., Google Sheets) to log all comparisons for future analysis.

This workflow transforms flight fare comparison into an automated, efficient process, delivering the best deals directly to your inbox!
by Sachin Shrestha
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This n8n workflow automates invoice management by integrating Gmail, PDF analysis, and Azure OpenAI GPT-4.1, with an optional human verification step for accuracy and control. It's ideal for businesses or individuals who regularly receive invoice emails and want to streamline their accounts payable process with minimal manual effort.

The system continuously monitors Gmail for new messages from specified senders. When it detects an email with a PDF attachment and a relevant subject line (e.g., "Invoice"), it automatically extracts text from the PDF, analyzes it using Azure OpenAI, and determines whether it is a valid invoice. If the AI is uncertain, the workflow sends a manual approval request to a human reviewer. Valid invoices are saved to local storage with a timestamped filename, and a confirmation email is sent upon successful processing.

**Who This Is For**

- Small to medium businesses
- Freelancers or consultants who receive invoices via email
- IT or automation teams looking to streamline document workflows
- Anyone using n8n with access to Gmail and Azure OpenAI

**Features**

- **Gmail Monitoring**: Automatically checks for new emails from trusted senders
- **AI-Powered Invoice Detection**: Uses Azure GPT-4.1 to intelligently verify PDF contents
- **PDF Text Extraction**: Extracts readable text for analysis
- **Human-in-the-Loop Verification**: Requests approval when AI confidence is low
- **Secure File Storage**: Saves invoices locally with structured filenames
- **Email Notifications**: Sends confirmations or manual review alerts

**Setup Instructions**

1. Prerequisites: an active n8n instance (self-hosted or cloud), a Gmail account with OAuth2 credentials, an Azure OpenAI account with access to the GPT-4.1 model, and a local directory for saving invoices (e.g., C:/Test/Invoices/).
2. Gmail OAuth2 setup: in n8n, create Gmail OAuth2 credentials, configure them with Gmail API access (read emails and attachments), and update the Gmail Trigger node to filter by sender email (e.g., sender@gmail.com).
3. Azure OpenAI setup: create Azure OpenAI API credentials in n8n, ensure your endpoint is correctly set and GPT-4.1 access is enabled, and link the credentials in the AI Analysis node.
4. Customize workflow settings: sender email (update in the Gmail Trigger), notification email (update in the Send Notification node), and save directory (change in the Save Invoice node).
5. Test the workflow: send a test email from the configured sender with a PDF invoice, wait for the workflow to trigger, and check that the file is saved in the directory, the confirmation email is received, and a manual review request is sent if needed.

**Workflow Steps**

Gmail Trigger → Check for PDF Invoice → Extract PDF Text → Analyze with GPT-4.1 →
- If Invoice: Save & Notify
- If Uncertain: Request Human Review
- If Not Invoice: Send Invalid Alert
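The branching at the end of the flow can be pictured with the following sketch. The confidence field, the 0.8 threshold, and the filename pattern are assumptions for illustration; the actual thresholds and paths are configured in the workflow's nodes.

```typescript
// Hedged sketch of the routing logic after the GPT-4.1 analysis step.
import { writeFileSync } from "node:fs";

interface InvoiceAnalysis {
  isInvoice: boolean;
  confidence: number; // 0..1, as reported by the model prompt
}

function timestampedFilename(dir: string): string {
  const stamp = new Date().toISOString().replace(/[:.]/g, "-");
  return `${dir}/invoice-${stamp}.pdf`;
}

function routeInvoice(analysis: InvoiceAnalysis, pdf: Buffer, saveDir: string): string {
  if (analysis.isInvoice && analysis.confidence >= 0.8) {
    writeFileSync(timestampedFilename(saveDir), pdf); // Save & Notify branch
    return "saved";
  }
  if (!analysis.isInvoice && analysis.confidence >= 0.8) {
    return "invalid-alert"; // Send Invalid Alert branch
  }
  return "human-review";    // Request Human Review branch
}
```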
by Mario
**Purpose**

This workflow lets you listen to your recent favorites offline in very high quality without sacrificing all of your storage.

**How it works**

The workflow automatically creates a playlist in Spotify named "Downloads" which is periodically updated so it always contains only a defined number of the latest liked songs. You can then enable automatic downloads for just the Downloads playlist, freeing up space on your device.

**Setup**

The workflow is ready to go. Just select your Spotify credentials and activate the workflow. In Spotify, enable automatic downloads on the automatically created Downloads playlist after the first workflow run.

**Current limitations**

This setup currently supports a maximum of 50 songs in the Downloads playlist. This is due to the payload limits Spotify enforces, which the Get liked songs node runs into. Implementing batching would solve the issue; a sketch of that approach follows.
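As a sketch of the batching idea mentioned above, the Spotify Web API's saved-tracks endpoint can be paged with limit and offset, which is one way to collect more than 50 liked songs before rebuilding the Downloads playlist. This illustrates the workaround; it is not something the template currently does, and the token handling is simplified compared to n8n's Spotify credential.

```typescript
// Hedged sketch of paging through liked songs so more than 50 tracks can be
// collected before rebuilding the Downloads playlist.
async function fetchLikedSongs(accessToken: string, wanted: number): Promise<string[]> {
  const uris: string[] = [];
  for (let offset = 0; uris.length < wanted; offset += 50) {
    const res = await fetch(
      `https://api.spotify.com/v1/me/tracks?limit=50&offset=${offset}`,
      { headers: { Authorization: `Bearer ${accessToken}` } },
    );
    if (!res.ok) throw new Error(`Spotify API returned ${res.status}`);
    const page = await res.json();
    for (const item of page.items) uris.push(item.track.uri);
    if (!page.next) break; // no more liked songs to page through
  }
  return uris.slice(0, wanted); // newest liked songs first, as returned by the API
}
```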
by n8n Team
This workflow creates or updates ClickUp tasks when Notion database pages are created or updated. All fields in the Notion database are mapped to a ClickUp property. The Notion database requires setup before the workflow can be used; see the list of fields in the setup below.

**Prerequisites**

- Notion account and Notion credentials.
- ClickUp account and ClickUp credentials.

**How it works**

When a new database page is created in Notion, the workflow creates a new task in ClickUp with all required fields. The new ClickUp task's ID is saved in the Notion database page's "ClickUp ID" field. Then, when the database page is updated in Notion, the workflow updates the specific ClickUp task identified by the "ClickUp ID" field in Notion.

**Setup**

This workflow requires that you set up a Notion database. To do so, follow the steps below:

1. In Notion, create a new database.
2. Add the following columns to the database:
   - Task name (renamed from "Name")
   - Status (type "Select", with the options "to do", "in progress", "review", "revision", "complete")
   - Deadline (type "Date")
   - ClickUp ID (type "Text")
   - Any other fields you require.
3. Share the database with n8n.

By default, the workflow fills all the fields listed above; any additional fields you add are not synced automatically.
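The create-or-update decision can be sketched as follows. The field names mirror the Notion columns listed in the setup, while the createTask and updateTask calls are placeholders for the ClickUp node rather than a real client API.

```typescript
// Hedged sketch of the sync decision: create a ClickUp task for new Notion pages,
// update the existing task when the page already carries a ClickUp ID.
interface NotionPage {
  taskName: string;
  status: "to do" | "in progress" | "review" | "revision" | "complete";
  deadline?: string;   // ISO date from the "Deadline" column
  clickUpId?: string;  // filled in after the first sync
}

async function syncPage(page: NotionPage, clickup: {
  createTask: (fields: object) => Promise<{ id: string }>;
  updateTask: (id: string, fields: object) => Promise<void>;
}): Promise<string> {
  const fields = { name: page.taskName, status: page.status, due_date: page.deadline };
  if (!page.clickUpId) {
    const task = await clickup.createTask(fields); // new page: create task
    return task.id;                                // written back to "ClickUp ID" in Notion
  }
  await clickup.updateTask(page.clickUpId, fields); // existing page: update task
  return page.clickUpId;
}
```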
by IvanCore
Disclaimer: This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Important distinction: This template manages Telegram Copilot's UserBots (client accounts), not Telegram Bots.

**UserBot vs. Bot: Key Differences**

Telegram Copilot's UserBots:
- Authenticate as real user accounts (phone number required)
- Can join groups/channels without the "Bot" label
- Subject to Telegram's client API limits
- Require manual login (MFA supported)

Telegram Bots:
- Use @BotFather-created tokens
- Limited to bot API functionality
- Can't initiate chats with unbidden users
- No phone number required

This template solves the unique challenges of UserBot management through the following core functionality.

**Core Functionality**

Session reliability:
- Automatic crash recovery (5-step restart sequence)
- Persistent session monitoring (checks every 6h)
- Database cleanup via the /clear command

Multi-device support:
- Manages sessions independently from mobile clients
- Tracks active devices via the /stat command
- Isolates session data per credential

Smart notifications:
- Real-time alerts to an admin chat
- Detailed error context with authState snapshots
- Success confirmations with session metadata

**Setup Guide**

Prerequisites:
- Self-hosted n8n instance (community node required)
- Valid Telegram account for the UserBot
- Telegram bot token for notifications
- TelePilot credentials with api_id/api_hash

Configuration steps:
1. Credential setup: add TelePilot credentials in n8n, configure the Telegram bot token in the notification nodes, and set the admin chat ID for alerts.
2. Monitoring customization: adjust the check frequency in the Schedule Trigger, modify alert thresholds in the Filter nodes, and configure the retry logic in the recovery sequence.
3. Session management: test the /start command flow, verify the /stat output format, and confirm notification delivery.

**Workflow Customization**

Advanced options:
- Add secondary notification channels (Email, Slack)
- Implement an escalating alert system
- Integrate with monitoring dashboards
- Customize recovery attempt limits

**Compliance Notes**

- UserBots must comply with Telegram's Terms of Service.
- Not intended for bulk messaging or spam.
- Recommended for legitimate automation use cases.

Note: UserBots must comply with Telegram's ToS. Not for spam or mass messaging.

Why this matters: UserBots enable automation scenarios impossible with regular bots (e.g., managing groups as a normal user or reacting as a human account). This workflow keeps them reliably online 24/7.
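The monitor-and-recover pattern described above can be sketched like this. The checkSession, restartStep, and notifyAdmin helpers are placeholders, not TelePilot node calls; they only illustrate the periodic check and the bounded 5-step restart sequence.

```typescript
// Hedged sketch: check the session periodically and run a bounded restart
// sequence when it is down, alerting the admin chat along the way.
const MAX_RECOVERY_STEPS = 5;

async function monitorSession(deps: {
  checkSession: () => Promise<boolean>;
  restartStep: (step: number) => Promise<boolean>;
  notifyAdmin: (message: string) => Promise<void>;
}) {
  if (await deps.checkSession()) return; // healthy: nothing to do this cycle (every 6h)

  await deps.notifyAdmin("UserBot session down, starting recovery sequence");
  for (let step = 1; step <= MAX_RECOVERY_STEPS; step++) {
    if (await deps.restartStep(step)) {
      await deps.notifyAdmin(`Session recovered at step ${step}`);
      return;
    }
  }
  await deps.notifyAdmin("Recovery failed after 5 steps, manual login may be required");
}
```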
by inderjeet Bhambra
**Who is this for?**

This workflow is designed for travel bloggers, content creators, social media managers, and anyone who wants to transform their travel photos into engaging written narratives. It's perfect for travelers looking to create compelling stories from their photo collections without spending hours crafting content manually, families wanting to document memorable trips, and digital nomads who need to produce travel content efficiently.

**What problem is this workflow solving?**

Converting travel photos into engaging stories is time-consuming and requires both creative writing skills and the ability to analyze visual content meaningfully. This workflow solves the challenge of:

- Transforming visual memories into compelling written narratives
- Organizing photos chronologically to create a logical story flow
- Generating professional-quality travel content without writing expertise
- Analyzing photo content to extract meaningful themes and emotions
- Creating day-by-day structured narratives from unorganized photo collections
- Reducing the time spent on manual content creation for travel documentation

**What this workflow does**

This AI-powered photo storyteller takes your travel photos and automatically generates immersive, first-person travel narratives. The workflow:

- Accepts multiple photos through a webhook endpoint
- Uses the OpenAI Vision API (GPT-4o) to analyze each photo's content, emotions, and themes
- Automatically organizes photos chronologically by date and timestamp
- Groups photos by travel days and extracts daily themes
- Leverages GPT-4.1 (minimum required) to craft engaging, first-person travel stories with creative day titles
- Generates structured narratives with sensory details, cultural observations, and emotional insights
- Outputs JSON-formatted content ready for formatting
- Creates a day-by-day story structure with memorable moments and reflective conclusions

**Setup**

Required credentials:
- OpenAI API key configured in n8n for both the Vision Analysis and Story Generation nodes
- Ensure you have sufficient OpenAI credits for image analysis and text generation

Webhook configuration:
- The workflow creates a webhook endpoint at /tripteller-upload
- Configure your photo upload interface to POST a photos array to this endpoint
- Photos should be sent as base64-encoded data with filename and metadata (an example payload is sketched below)

Photo requirements:
- Supported formats: standard image formats (JPEG, PNG, etc.)
- Photos should include timestamp metadata for chronological organization

Caution: do not upload all photos at once. Start with a small number of photos, like 5 at a time.
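As referenced in the webhook configuration above, here is a hedged example of a request to the /tripteller-upload endpoint. The field names inside each photo object and the /webhook/ prefix are assumptions for illustration; align them with what the workflow's webhook and photo-grouping nodes actually expect.

```typescript
// Hedged sketch of a POST to the upload webhook; field names are illustrative.
const payload = {
  photos: [
    {
      filename: "IMG_0421.jpg",
      data: "<base64-encoded image bytes>",
      takenAt: "2025-06-14T09:32:00Z", // timestamp metadata used for chronological ordering
    },
    {
      filename: "IMG_0437.jpg",
      data: "<base64-encoded image bytes>",
      takenAt: "2025-06-14T18:05:00Z",
    },
  ],
};

async function uploadPhotos(n8nBaseUrl: string) {
  const res = await fetch(`${n8nBaseUrl}/webhook/tripteller-upload`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
  return res.json(); // the generated day-by-day story structure
}
```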
**How to customize this workflow to your needs**

Story style customization:
- Modify the system prompt in the "Generate Travel Story" node to adjust the writing tone (nostalgic, adventurous, poetic, etc.)
- Customize the story structure by editing the output format requirements
- Add specific cultural or geographical context prompts for location-specific storytelling

Photo analysis enhancement:
- Adjust the Vision Analysis node prompt to focus on specific elements (architecture, food, people, landscapes)
- Modify the grouping logic in the "Group Photos by Day" node for different time-based organization
- Add location extraction from EXIF data for geographical context

Output format adjustment:
- Customize the final response structure in the "Format Final Response" node
- Add integration with publishing platforms (blog APIs, social media, etc.)
- Include additional metadata like location tags, travel duration, or trip statistics

Performance optimization:
- Adjust the execution timeout based on your typical photo volume
- Modify the parallel processing approach for large photo collections
- Add progress tracking for longer processing workflows
by Aitor | 1Node
This n8n workflow processes incoming Telegram messages, differentiating between text and voice messages.

**How it works**

1. Message Trigger: the workflow initiates when a new message is received via the Telegram "Message Trigger" node.
2. Switch Node: this node acts as a router and examines the incoming message. If the message is text, it directs the flow along the "text" branch; if the message contains voice, it directs the flow along the "voice" branch.
3. Get Audio File: for voice messages, this node downloads the audio file from Telegram.
4. Transcribe Audio: the downloaded audio file is sent to an "OpenAI Transcribe Recording" node, which uses OpenAI's whisper-1 speech-to-text model to convert the audio into a text transcript.
5. Send Transcription Message: regardless of whether the original message was text or transcribed audio, the final text content is passed to a "Send transcription message" node.

**Setup Requirements**

- **Telegram Bot Token**: You will need a Telegram bot token configured in the "Message Trigger" node to receive messages.
- **OpenAI API Key**: An OpenAI API key is required for the "Transcribe audio" node to perform speech transcription.

**Additional Notes**

This workflow provides a foundational step for building more complex AI-driven applications. The transcribed text or original text message can easily be piped into an AI agent (e.g., a large language model) for analysis, response generation, or interaction with other tools, extending the bot's capabilities beyond simple message reception and transcription.

**Need Help?**

Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
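Outside of n8n, the same branch-and-transcribe flow looks roughly like the sketch below, using the Telegram Bot API to download the voice file and OpenAI's whisper-1 transcription endpoint. The environment variable names are assumptions; in the template these calls are handled by the Telegram and OpenAI nodes.

```typescript
// Hedged sketch of the text-vs-voice branch and the whisper-1 transcription call.
const TELEGRAM_TOKEN = process.env.TELEGRAM_TOKEN ?? "";
const OPENAI_API_KEY = process.env.OPENAI_API_KEY ?? "";

async function handleTelegramMessage(message: { text?: string; voice?: { file_id: string } }) {
  if (message.text) return message.text; // "text" branch: pass the text straight through

  if (message.voice) {
    // "voice" branch: resolve the file path, download the audio, then transcribe it.
    const fileInfo = await (await fetch(
      `https://api.telegram.org/bot${TELEGRAM_TOKEN}/getFile?file_id=${message.voice.file_id}`,
    )).json();
    const audio = await (await fetch(
      `https://api.telegram.org/file/bot${TELEGRAM_TOKEN}/${fileInfo.result.file_path}`,
    )).blob();

    const form = new FormData();
    form.append("file", audio, "voice.oga");
    form.append("model", "whisper-1");
    const transcription = await (await fetch("https://api.openai.com/v1/audio/transcriptions", {
      method: "POST",
      headers: { Authorization: `Bearer ${OPENAI_API_KEY}` },
      body: form,
    })).json();
    return transcription.text;
  }
  return ""; // other message types are ignored in this sketch
}
```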
by Sarfaraz Muhammad Sajib
**Email Validation Workflow Using the APILayer API**

This n8n workflow lets you validate email addresses in real time using the APILayer Email Verification API. It's particularly useful for preventing invalid email submissions during lead generation, user registration, or newsletter sign-ups, ultimately improving data quality and reducing bounce rates.

**Step-by-Step Setup Instructions**

1. Trigger the workflow manually: the workflow starts with the Manual Trigger node, allowing you to test it on demand from the n8n editor.
2. Set the required fields: the Set Email & Access Key node lets you enter:
   - email: the target email address to validate.
   - access_key: your personal API key from apilayer.net.
3. Make the API call: the HTTP Request node dynamically constructs the URL
   https://apilayer.net/api/check?access_key={{ $json.access_key }}&email={{ $json.email }}
   and sends a GET request to the APILayer endpoint, which returns a detailed response about the email's validity.
4. (Optional) Add additional nodes to filter, store, or react to the results depending on your needs.

**How to Customize**

- Replace the manual trigger with a webhook or schedule trigger to automate validations.
- Dynamically map the email and access_key values from previous nodes or external data sources.
- Add conditional logic to filter out invalid emails, log them in a database, or send alerts via Slack or email.

**Use Case & Benefits**

Email validation is crucial for maintaining a clean and functional mailing list. This workflow is especially valuable in:

- Sign-up forms, where real-time email checks prevent fake or disposable addresses.
- CRM systems, to ensure user-entered emails are valid before saving them.
- Marketing pipelines, to minimize email bounce rates and increase campaign deliverability.

Using APILayer's trusted validation service, you can verify whether an email exists, check if it's a role-based address (like info@ or support@), and identify disposable email services, all with a simple workflow.

Keywords: email validation, n8n workflow, APILayer API, verify email, real-time email check, clean email list, reduce bounce rate, data accuracy, API integration, no-code automation
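For reference, the same call made outside n8n, together with a simple acceptance rule, might look like the sketch below. The response fields used (format_valid, smtp_check, disposable, role, score) are typical of this API but should be confirmed against APILayer's documentation.

```typescript
// Hedged sketch of the validation call plus a simple acceptance rule.
async function validateEmail(accessKey: string, email: string) {
  const url = `https://apilayer.net/api/check?access_key=${accessKey}&email=${encodeURIComponent(email)}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`APILayer returned ${res.status}`);
  const result = await res.json();

  const acceptable =
    result.format_valid === true &&
    result.smtp_check === true &&
    result.disposable !== true; // reject throwaway mailboxes

  return { acceptable, role: result.role === true, score: result.score, raw: result };
}

// Example usage, e.g. behind a sign-up form handler:
// const { acceptable } = await validateEmail(process.env.APILAYER_KEY ?? "", "info@example.com");
```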