by Jan Willem Altink
Supabase Storage File Upload Workflow (works with self-hosted Supabase)

ℹ️ How it works
• Accepts file data (MIME type, filename, base64 content) from other workflows
• Automatically routes files to the appropriate storage bucket based on file type (images, audio, video, documents)
• Uploads files to Supabase Storage using the REST API
• Generates secure signed URLs for file access with a 30-day expiration
• Returns structured success/error responses for downstream processing

🏗️ Set up steps
• Configure Supabase API credentials in n8n
• Create storage buckets in your Supabase project (image-files, audio-files, video-files, document-files), or choose your own bucket structure
• Replace the URL paths with your own
• Test the workflow using the included form trigger
• Remove the test form and integrate with your main workflows

📚 Reference: Supabase Storage Documentation
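For reference, the data a calling workflow hands to this template would look roughly like the payload below. The exact property names depend on how you map fields in your Execute Workflow nodes, so treat them as illustrative placeholders rather than a required schema; a PDF like this one would be routed to the document-files bucket.

```json
{
  "fileName": "quarterly-report.pdf",
  "mimeType": "application/pdf",
  "base64Content": "JVBERi0xLjQKJcfs..."
}
```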
by William Lettieri
Overview
Transform your LLM into a powerful GitHub automation specialist with this n8n workflow template. In a world where multiple MCP servers can overwhelm LLMs with context, this streamlined solution provides a dedicated GitHub Agent that handles all GitHub API operations through a single, specialized tool.

When you need GitHub operations like creating repositories, managing issues, or handling pull requests, your LLM can make one simple call to the GitHub Agent. This agent specializes exclusively in GitHub MCP server operations, offloading all contextual complexity and providing clean, efficient GitHub automation.

✨ Features
- **Single MCP Server Trigger** - One tool and one parameter to handle all GitHub API interactions
- **Specialized GitHub Agent** - Dedicated AI agent with direct GitHub MCP Server connection
- **Self-Executing Workflow** - "When Executed by Another Workflow" trigger enables seamless workflow chaining
- **Scalable Architecture** - Ready to integrate with unlimited GitHub tools and operations
- **Context Optimization** - Reduces LLM token usage by delegating GitHub complexity to a specialized agent
- **Flexible Request Processing** - Handles any GitHub operation through natural language requests

🎯 Use Cases
- **Repository Management** - Create, clone, and manage repositories programmatically
- **Issue Tracking** - Automate issue creation, updates, and management workflows
- **Pull Request Automation** - Streamline code review and merge processes
- **GitHub Actions Integration** - Trigger and monitor CI/CD workflows
- **Team Collaboration** - Automate notifications and team management tasks
- **Documentation Updates** - Automatically update README files and documentation

🏗️ Workflow Architecture
Node Breakdown:
1. **MCP Server Trigger** - Receives requests with GitHub operation parameters
2. **Set GitHub Username** - Configures GitHub user context for API calls
3. **OpenAI Chat Model** - Powers the intelligent GitHub agent with contextual understanding
4. **Simple Memory** - Maintains conversation context and operation history
5. **GitHub AI Agent** - Specialized Tools Agent with direct GitHub MCP Server access

[MCP Server Trigger] → [Set GitHub Username] → [GitHub AI Agent]
↓
[OpenAI Chat Model] ← [Simple Memory] ← [GitHub API Operations]

📋 Requirements
Essential Prerequisites:
- ✅ **OpenAI API Key** - For AI Agent and Chat Model functionality
- ✅ **GitHub Username Configuration** - Edit the "Set GitHub Username" node with your GitHub username for API calls
- ✅ **n8n Version** - Compatible with n8n 2024+ releases
- ✅ **MCP Server Setup** - Existing GitHub MCP server configuration

Recommended Setup:
- GitHub Personal Access Token with appropriate permissions
- Basic understanding of n8n workflow configuration
- Familiarity with GitHub API operations

🚀 Setup Instructions
Step 1: Import and Configure
1. Import the workflow template into your n8n instance
2. Navigate to the Set GitHub Username node
3. Replace the placeholder with your actual GitHub username

Step 2: API Keys Setup
1. Configure your OpenAI API key in the Chat Model node
2. Ensure your GitHub credentials are properly configured in n8n
3. Test the connection to verify API access

Step 3: MCP Server Integration
1. Connect your existing GitHub MCP server to the workflow
2. Verify the MCP Server Trigger is properly configured
3. Test with a simple GitHub operation (e.g., "List my repositories")

Step 4: Deploy and Test
1. Activate the workflow in your n8n instance
2. Test with various GitHub operations to ensure functionality
3. Monitor execution logs for any configuration issues
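As an illustration of the test in Step 3, a call from an MCP client or another workflow might pass a single natural-language parameter like the one below. The parameter name `request` is an assumption here; match it to whatever the MCP Server Trigger in the imported workflow actually defines.

```json
{
  "request": "List my repositories and create an issue titled 'Initial setup' in demo-repo"
}
```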
🔧 Customization Options
Agent Behavior
- **Modify the Chat Model prompt** to adjust agent personality and response style
- **Configure memory settings** to control conversation context retention
- **Adjust timeout settings** for long-running GitHub operations

GitHub Operations
- **Extend supported operations** by adding new GitHub API endpoints
- **Configure repository filters** to limit the scope of operations
- **Set up notification preferences** for important GitHub events

Integration Points
- **Webhook triggers** for real-time GitHub event processing
- **Scheduled operations** for regular repository maintenance
- **Cross-workflow triggers** for complex automation chains

💡 Pro Tips
- **Start Simple**: Begin with basic operations like repository listing before attempting complex workflows
- **Monitor Token Usage**: The specialized agent approach significantly reduces OpenAI API costs
- **Batch Operations**: Group related GitHub operations in single requests for efficiency
- **Error Handling**: The agent provides detailed error messages for troubleshooting

🤝 Support and Community
- **Documentation**: Official n8n Documentation
- **Community Forum**: n8n Community
- **Issues & Contributions**: Feel free to suggest improvements or report issues

📄 License
This workflow template is provided under the MIT License. You're free to use, modify, and redistribute with attribution.

Created by: William Lettieri
Version: 1.0
Last Updated: May 28, 2025
Compatibility: n8n 2024+
by Gleb D
This n8n workflow automates the discovery, enrichment, and comparative analysis of startups from the Crunchbase dataset via Bright Data, enhances the results with AI, and exports structured data to Google Sheets.

🚀 What It Does
- Receives a keyword from the user that describes the area of interest, such as an industry, sector, technology, or trend (e.g., "AI in healthcare", "carbon capture", "edtech"). This keyword is used to filter relevant startups from the Crunchbase dataset via Bright Data.
- Fetches data from Bright Data's Crunchbase snapshot API.
- Extracts and cleans key fields from the JSON response.
- Sorts startups by most recent founding date.
- Selects the top 10 most recent companies.
- Sends these 10 companies to Google Gemini AI for comparative analysis.
- Embeds the AI-generated summary into the final export.
- Appends results to a Google Sheet for tracking and reporting.

🛠️ Step-by-Step Setup
1. Get the user's keyword input from a form.
2. Use 3 Bright Data requests: start the snapshot, poll the snapshot status until it is ready, then fetch the snapshot data in JSON format.
3. Use a Python Code node to parse and sort companies by founded_date and to clean and standardize data fields (a sketch of this step is shown below).
4. Pass the top 10 companies into Gemini AI for comparative insight.
5. Merge the AI output back with the company data.
6. Send everything to Google Sheets.

🧠 How It Works
- Snapshot Control: Polls every few seconds until the Bright Data snapshot is complete.
- Code Cleanup: Ensures consistent structure and formatting across all records.
- Comparative AI Analysis: Gemini compares all 10 companies at once and returns a unified analysis.
- Merging Output: The AI analysis is merged into the first company's record (to avoid duplication), while all 10 companies are exported.

📤 Google Sheet Output
Each row includes: name, founded, about, num_employees, type, ipo_status, full_description, social_media_links, address, website, funding_total, num_investors, lead_investors, founders, products_and_services, monthly_visits, crunchbase_link, ai_analysis.
- The AI comparative analysis summary appears only once per batch, attached to the first company.
- All of the fields above are customizable through the Python code (you can add more from the Bright Data output).

🔐 Required Credentials
- **Bright Data** – Replace YOUR_API_KEY in the 3 HTTP Request nodes.
- **Google Gemini API** – For AI analysis.
- **Google Sheets OAuth2** – For spreadsheet export.

⚠️ Notes
- The AI output is shared once per batch of 10 companies and attached to the first company entry.
- You can configure the batch size limit in the first "Code" node.
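A minimal sketch of what the Python Code node's sorting step could look like, assuming the Bright Data snapshot items expose a founded_date field in ISO format (adjust the field names to the actual snapshot schema):

```python
# n8n Python Code node (sketch): sort snapshot records by founding date, keep the 10 newest
def founded_key(company):
    # ISO-formatted dates ("2021-04-30") sort correctly as strings; missing dates sort last
    return company.get("founded_date") or ""

companies = [item.json for item in _input.all()]
companies.sort(key=founded_key, reverse=True)

return [{"json": company} for company in companies[:10]]
```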
by ist00dent
This n8n template provides a simple yet powerful utility for validating whether a given string is valid JSON. You can use it to pre-validate data received from external sources, ensure data integrity before further processing, or provide immediate feedback to users submitting JSON strings.

🔧 How it works
- Webhook: This node acts as the entry point for the workflow, listening for incoming POST requests. It expects a JSON body with a single property, jsonString: the string that you want to validate as JSON.
- Code (JSON Validator): This node contains custom JavaScript code that attempts to parse the jsonString provided in the webhook body. If the jsonString can be successfully parsed, it is valid JSON and the node returns an item with valid: true. If parsing fails, it catches the error and returns an item with valid: false and the specific error message. This logic is applied to each item passed through the node, ensuring all inputs are validated.
- Respond to Webhook: This node sends the validation result (either valid: true or valid: false with an error message) back to the service that initiated the webhook request.

👤 Who is it for?
This workflow is ideal for:
- Developers & Integrators: Pre-validate JSON payloads from external systems (APIs, webhooks) before processing them in your workflows, preventing errors.
- Data Engineers: Ensure the integrity of JSON data before storing it in databases or data lakes.
- API Builders: Offer a dedicated endpoint for clients to test their JSON strings for validity.
- Customer Support Teams: Quickly check user-provided JSON configurations for errors.
- Anyone handling JSON data: A quick and easy way to programmatically check JSON string correctness without writing custom code in every application.

📑 Data Structure
When you trigger the webhook, send a POST request with a JSON body structured as follows:

```json
{ "jsonString": "{\"name\": \"n8n\", \"type\": \"workflow\"}" }
```

Example of an invalid JSON string (missing quotes around 'name'):

```json
{ "jsonString": "{name: \"n8n\"}" }
```

The workflow will return a JSON response indicating validity. For a valid JSON string:

```json
{ "valid": true }
```

For an invalid JSON string:

```json
{ "valid": false, "error": "Unexpected token 'n', \"{name: \"n8n\"}\" is not valid JSON" }
```

⚙️ Setup Instructions
1. Import Workflow: In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
2. Configure Webhook Path: Double-click the Webhook node. In the 'Path' field, set a unique and descriptive path (e.g., /validate-json).
3. Activate Workflow: Save and activate the workflow.

📝 Tips
This JSON validator workflow is a solid starting point. Consider these enhancements:
- Enhanced Error Feedback: Add a Set node after the Code node to format the error message into a more user-friendly string before responding, making it easier for the caller to understand the issue.
- Logging Invalid Inputs: After the Code node, add an IF node to check whether valid is false. If so, branch to a node that logs the invalid jsonString and error to a Google Sheet, database, or logging service, so you can track common invalid inputs for debugging or improvement.
- Transforming Valid JSON: If the JSON is valid, add another Code node to parse the jsonString and operate on the parsed JSON data directly within the workflow, using this validator as the first step in a larger workflow that processes JSON data.
- Asynchronous Validation: For very large JSON strings or high-volume requests, consider using a separate queueing mechanism (e.g., RabbitMQ, SQS) and an asynchronous response pattern to prevent webhook timeouts and improve system responsiveness.
by Don Jayamaha Jr
A short-term technical analysis agent for 15-minute candles on Binance Spot Market pairs. It calculates and interprets key trading indicators (RSI, MACD, BBANDS, ADX, SMA/EMA) and returns structured summaries, optimized for Telegram or downstream AI trading agents. This tool is designed to be triggered by another workflow (such as the Binance SM Financial Analyst Tool or Binance Quant AI Agent) and is not intended for standalone use.

🔧 Key Features
- ⏱️ Uses 15-minute kline data (last 100 candles)
- 📈 Calculates: RSI, MACD, Bollinger Bands, SMA/EMA, ADX
- 🧠 Interprets the numeric data using GPT-4.1-mini
- 📤 Outputs concise, formatted analysis like:
  • RSI: 72 → Overbought
  • MACD: Cross Up
  • BB: Expanding
  • ADX: 34 → Strong Trend

🧠 AI Agent Purpose
> You are a short-term analysis tool for spotting volatility, early breakouts, and scalping setups.
Used by higher-level agents to determine: entry/exit precision, momentum shifts, and scalping opportunities.

⚙️ How it Works
1. Triggered externally by another workflow.
2. Accepts input: { "message": "BTCUSDT", "sessionId": "123456789" }
3. Sends a POST request to the backend endpoint: https://treasurium.app.n8n.cloud/webhook/15m-indicators
4. Fetches the last 100 candles and calculates the indicators.
5. Passes the data to GPT for interpretation.
6. Returns a summary with indicator tags for human readability.

🔗 Dependencies
This tool is triggered by:
- ✅ Binance SM Financial Analyst Tool
- ✅ Binance Spot Market Quant AI Agent

🚀 Setup Instructions
1. Import into your n8n instance.
2. Make sure the /15m-indicators webhook is active and calculates indicators correctly.
3. Connect your OpenAI GPT-4.1-mini credentials.
4. Trigger from an upstream agent with a Binance symbol and session ID.
5. Ensure all external calls (to Binance and the webhook) are working.

🧪 Example Use Cases

| Use Case | Result |
| ------------------------------------- | --------------------------------------- |
| Short-term trade decision for ETHUSDT | Receives 15m signal indicators summary |
| Input from Financial Analyst Tool | Returns real-time volatility snapshot |
| Telegram bot asks for "DOGE update" | Returns momentum indicators in 15m view |

🎥 Watch Tutorial:

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.
🔗 For support: Don Jayamaha – LinkedIn
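For context on the numbers in the sample output above: the RSI value is the standard 14-period calculation over closing prices. The sketch below only illustrates that textbook formula and is not the template's backend code.

```python
# Illustrative 14-period RSI from a list of closing prices (Wilder's smoothing)
def rsi(closes, period=14):
    # Expects at least period + 1 closes; the workflow fetches 100 candles
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))

    # Seed with simple averages over the first `period` moves, then smooth
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for gain, loss in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + gain) / period
        avg_loss = (avg_loss * (period - 1) + loss) / period

    if avg_loss == 0:
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)

# e.g. rsi(last_100_closes) -> 72.3 would be tagged "Overbought" in the summary
```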
by Sam Robertson
Generate Summaries from Uploaded Files using OpenAI Assistants API

📑 Overview
Upload a document (PDF, DOCX, PPTX, TXT, CSV, JSON, or Markdown) and receive an AI-generated summary containing:
- **title** – 5-10 words
- **summary** – 1-2 sentences
- **bullets** – 3-5 key points
- **tags** – 3-6 short keywords

The workflow:
1. Stores the file in OpenAI.
2. Runs an Assistant with File Search and Code Interpreter enabled.
3. Polls until the run finishes.
4. Retrieves the summary JSON.

✅ Prerequisites
1. OpenAI Assistant
   - Create one at <https://platform.openai.com/assistants>
   - Enable File Search and Code Interpreter
   - Note: the assistant ID starts with asst_
2. OpenAI API credential set up in n8n
   - Go to Credentials → New → HTTP Header Auth
   - Header name: Authorization
   - Value: Bearer YOUR-OPENAI-API-KEY (replace YOUR-OPENAI-API-KEY with your OpenAI API secret key, which starts with sk-)
   - Name it: openAIApiHeader

🔧 Setup
1. Import the workflow JSON.
2. When n8n prompts for a credential, choose openAIApiHeader for every HTTP Request node.
3. Open Run Assistant → Body and replace "assistant_id": "REPLACE_WITH_YOUR_ASSISTANT_ID" with your real ID (starts with asst_…).
4. Save.

🚀 How it works

| # | Node | Purpose |
|---|------|---------|
| 1 | On form submission | User uploads a file (File). |
| 2 | Upload File | POST /v1/files (multipart) → returns file_id. |
| 3 | Create Thread | Creates a thread and attaches the uploaded file. |
| 4 | Run Assistant | Starts the run using your assistant_id. |
| 5 | Poll Run Status → Wait 2 s → IF | Loops until status = completed. |
| 6 | Fetch Summary | GET /v1/threads/{thread_id}/messages → summary JSON. |

🖌️ Customisation ideas
- Edit the user prompt in Create Thread to change summary length, tone, or language.
- Add an HTTP Response node after Fetch Summary to return plaintext to the uploader.
- Replace the polling loop with OpenAI's forthcoming wait-for-run endpoint when available.

No community nodes required. Works on any n8n Cloud plan (Starter, Pro, Enterprise) or self-hosted Community Edition.
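For illustration, the summary JSON the assistant returns follows the field list above; a plausible response (the exact wording and keys depend on your assistant's instructions) looks like:

```json
{
  "title": "Q3 Revenue Performance and Outlook",
  "summary": "The report reviews third-quarter revenue growth and outlines risks for the coming quarter.",
  "bullets": [
    "Revenue grew 12% quarter over quarter",
    "Churn decreased in the enterprise segment",
    "New pricing tiers launch next quarter"
  ],
  "tags": ["revenue", "quarterly-report", "forecast"]
}
```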
by Joachim Hummel
This n8n workflow automates posting Amazon affiliate products to Mastodon, complete with image upload, description, and a shortened tracking URL using Shlink.

🔧 How it works
1. Input Source: The workflow starts by reading from a connected Google Sheet that contains:
   - SHlink (short link)
   - Amazon Link
   - Description (optional)
   - PicURL
   - Send (NO or YES) – a flag used to check whether the row was already posted
2. Image Upload: It fetches the product image via HTTP and uploads it directly to a Mastodon instance via the /media API endpoint.
3. URL Shortening (Shlink): The original Amazon URL is shortened using your self-hosted or cloud-hosted Shlink instance to enable click tracking and better presentation.
4. Text Generation: A two-line promotional text is automatically generated using a Language Model (LLM), based on the product description.
5. Posting to Mastodon: The post is then published on Mastodon with the image, the generated text, and the shortened Shlink URL.
6. Row Update: Once published, the Send column in the Google Sheet is updated to "YES" to prevent duplicates.

Requirements
- ✅ Shlink – Required for shortening and tracking Amazon URLs
- ✅ Google Sheet – Used as the product queue and post log
- ✅ Google Sheet Example: https://link.unixweb.home64.de/w7VqY
- ✅ Mastodon account – OAuth2 credentials with write scope
- ✅ Product image URL – Must be valid and accessible
- ✅ n8n credentials – Set up for Google Sheets, Mastodon, and optionally OpenRouter or other LLM providers

This workflow is ideal for content creators, affiliate marketers, and automation fans who want to save time and optimize reach across the Fediverse.

#affiliate #amazon #mastodon #advertisement
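If you ever need to adjust the Shlink call yourself, creating a short URL is a single authenticated POST along these lines. This is a generic illustration of Shlink's REST API; verify the endpoint version and field names against your own Shlink instance.

```http
POST https://your-shlink-instance.example/rest/v3/short-urls
X-Api-Key: YOUR_SHLINK_API_KEY
Content-Type: application/json

{
  "longUrl": "https://www.amazon.com/dp/B0EXAMPLE?tag=your-affiliate-tag"
}
```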
by Nukeador
Who is this for?
BlueSky users looking to automate the publication of new posts based on new items from an RSS feed.

What this workflow does
It creates a BlueSky post for each new RSS feed item, including the feed title, post image, link, and content (up to 200 characters).

Setup
1. Generate a BlueSky app password.
2. Configure your feed URL in the first node.
3. Configure your credentials in the second node.

How to customize this workflow to your needs
You can modify the message posted in the `Create post` node by changing the `JSON text` value, for example if you want to include only the feed item title instead of the content. If your RSS feed doesn't provide an image, you can define a static one in the `Download image` node.
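For orientation, a Bluesky post created through the AT Protocol's com.atproto.repo.createRecord endpoint carries a record roughly like the one below. This is only to illustrate the kind of payload the `Create post` node assembles; the template's actual `JSON text` value may be structured differently.

```json
{
  "repo": "your-handle.bsky.social",
  "collection": "app.bsky.feed.post",
  "record": {
    "$type": "app.bsky.feed.post",
    "text": "Feed title: Post title\nFirst 200 characters of the item content...\nhttps://example.com/post",
    "createdAt": "2025-05-28T10:30:00Z"
  }
}
```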
by Shiv Gupta
🎵 TikTok Post Scraper via Keywords | Bright Data + Sheets Integration

📝 Workflow Description
Automatically scrapes TikTok posts based on keyword search using the Bright Data API and stores comprehensive data in Google Sheets for analysis and monitoring.

🔄 How It Works
This workflow operates through a simple, automated process:
1. **Keyword Input:** User submits search keywords through a web form
2. **Data Scraping:** Bright Data API searches TikTok for posts matching the keywords
3. **Processing Loop:** Monitors scraping progress and waits for completion
4. **Data Storage:** Automatically saves all extracted data to Google Sheets
5. **Result Delivery:** Provides comprehensive post data including metrics, user info, and media URLs

⏱️ Setup Information
Estimated setup time: 10-15 minutes. This includes importing the workflow, configuring credentials, and testing the integration. Most of the process is automated once properly configured.

✨ Key Features
- 📝 Keyword-Based Search: Search TikTok posts using specific keywords
- 📊 Comprehensive Data Extraction: Captures post metrics, user profiles, and media URLs
- 📋 Google Sheets Integration: Automatically organizes data in spreadsheets
- 🔄 Automated Processing: Handles scraping progress monitoring
- 🛡️ Reliable Scraping: Uses Bright Data's professional infrastructure
- ⚡ Real-time Updates: Live status monitoring and data processing

📊 Data Extracted

| Field | Description | Example |
|-------|-------------|---------|
| url | TikTok post URL | https://www.tiktok.com/@user/video/123456 |
| post_id | Unique post identifier | 7234567890123456789 |
| description | Post caption/description | Check out this amazing content! #viral |
| digg_count | Number of likes | 15400 |
| share_count | Number of shares | 892 |
| comment_count | Number of comments | 1250 |
| play_count | Number of views | 125000 |
| profile_username | Creator's username | @creativity_master |
| profile_followers | Creator's follower count | 50000 |
| hashtags | Post hashtags | #viral #trending #fyp |
| create_time | Post creation timestamp | 2025-01-15T10:30:00Z |
| video_url | Direct video URL | https://video.tiktok.com/tos/... |
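Put together, a single scraped post maps onto a sheet row like this (values reuse the examples from the table above):

```json
{
  "url": "https://www.tiktok.com/@user/video/123456",
  "post_id": "7234567890123456789",
  "description": "Check out this amazing content! #viral",
  "digg_count": 15400,
  "share_count": 892,
  "comment_count": 1250,
  "play_count": 125000,
  "profile_username": "@creativity_master",
  "profile_followers": 50000,
  "hashtags": "#viral #trending #fyp",
  "create_time": "2025-01-15T10:30:00Z",
  "video_url": "https://video.tiktok.com/tos/..."
}
```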
🚀 Setup Instructions

Step 1: Prerequisites
- n8n instance (self-hosted or cloud)
- Bright Data account with TikTok scraping dataset access
- Google account with Sheets access
- Basic understanding of n8n workflows

Step 2: Import Workflow
1. Copy the provided JSON workflow code
2. In n8n: go to Workflows → + Add workflow → Import from JSON
3. Paste the JSON code and click Import
4. The workflow will appear in your n8n interface

Step 3: Configure Bright Data
1. In n8n: navigate to Credentials → + Add credential → Bright Data API
2. Enter your Bright Data API credentials
3. Test the connection to ensure it's working
4. Update the workflow nodes with your dataset ID: gd_lu702nij2f790tmv9h
5. Replace BRIGHT_DATA_API_KEY with your actual API key

Step 4: Configure Google Sheets
1. Create a new Google Sheet or use an existing one
2. Copy the Sheet ID from the URL
3. In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
4. Complete the OAuth setup and test the connection
5. Update the Google Sheets node with your Sheet ID
6. Ensure the sheet has a tab named "Tiktok by keyword"

Step 5: Test the Workflow
1. Activate the workflow using the toggle switch
2. Access the form trigger URL to submit a test keyword
3. Monitor the workflow execution in n8n
4. Verify data appears in your Google Sheet
5. Check that all fields are populated correctly

⚙️ Configuration Details

Bright Data API Settings
- **Dataset ID:** gd_lu702nij2f790tmv9h
- **Discovery Type:** discover_new
- **Search Method:** keyword
- **Results per Input:** 2 posts per keyword
- **Include Errors:** true

Workflow Parameters
- **Wait Time:** 1 minute between status checks
- **Status Check:** Monitors until scraping is complete
- **Data Format:** JSON response from Bright Data
- **Error Handling:** Automatic retry on incomplete scraping

📋 Usage Guide

Running the Workflow
1. Access the form trigger URL provided by n8n
2. Enter your desired keyword (e.g., "viral dance", "cooking tips")
3. Submit the form to start the scraping process
4. Wait for the workflow to complete (typically 2-5 minutes)
5. Check your Google Sheet for the extracted data

Best Practices
- Use specific, relevant keywords for better results
- Monitor your Bright Data usage to stay within limits
- Regularly back up your Google Sheets data
- Test with simple keywords before complex searches
- Review extracted data for accuracy and completeness

🔧 Troubleshooting

🚨 Scraping Not Starting
- Verify Bright Data API credentials are correct
- Check that the dataset ID matches your account
- Ensure sufficient credits in your Bright Data account

🚨 No Data in Google Sheets
- Confirm Google Sheets credentials are authenticated
- Verify the sheet ID is correct
- Check that the "Tiktok by keyword" tab exists

🚨 Workflow Timeout
- Increase the wait time if scraping takes longer
- Check the Bright Data dashboard for scraping status
- Verify the keyword produces available results

📈 Use Cases
- **Content Research:** Research trending content and hashtags in your niche to inform your content strategy.
- **Competitor Analysis:** Monitor competitor posts and engagement metrics to understand market trends.
- **Influencer Discovery:** Find influencers and creators in specific topics or industries.
- **Market Intelligence:** Gather data on trending topics, hashtags, and user engagement patterns.

🔒 Security Notes
- Keep your Bright Data API credentials secure
- Use appropriate Google Sheets sharing permissions
- Monitor API usage to prevent unexpected charges
- Regularly rotate API keys for better security
- Comply with TikTok's terms of service and data usage policies

🎉 Ready to Use!
Your TikTok scraper is now configured and ready to extract valuable data. Start with simple keywords and gradually expand your research as you become familiar with the workflow.

Need Help? Visit the n8n community forum or check the Bright Data documentation for additional support and advanced configuration options. For any questions or support, please contact: Email or fill out this form
by Nick Saraev
Deep Multiline Icebreaker System (AI-Powered Cold Email Personalization)

Categories: Lead Generation, AI Marketing, Sales Automation

This workflow creates an advanced AI-powered cold email personalization system that achieves 5-10% reply rates by generating deeply personalized multi-line icebreakers. The system scrapes comprehensive website data, analyzes multiple pages per prospect, and uses advanced AI prompting to create custom email openers that make recipients believe you've personally researched their entire business.

Benefits
- **Superior Response Rates** - Achieves 5-10% reply rates vs. 1-2% for standard cold email campaigns
- **Deep Website Intelligence** - Scrapes and analyzes multiple pages per prospect, not just homepages
- **Advanced AI Personalization** - Uses sophisticated prompting techniques with examples and formatting rules
- **Complete Lead Pipeline** - From Apollo search to personalized icebreakers in Google Sheets
- **Scalable Processing** - Handle hundreds of prospects with intelligent batching and error handling
- **Revenue-Focused Approach** - System designed around proven $72K/month agency methodologies

How It Works
1. Apollo Lead Acquisition:
   - Integrates directly with Apollo.io search URLs through the Apify scraper
   - Processes 500+ leads per search with comprehensive contact data
   - Filters for prospects with both email addresses and accessible websites
2. Multi-Page Website Scraping:
   - Scrapes the homepage to extract all internal website links
   - Processes relative URLs and filters out external/irrelevant links
   - Performs intelligent batching to prevent IP blocking during scraping
3. Comprehensive Content Analysis:
   - Converts HTML to markdown for efficient AI processing
   - Uses GPT-4 to generate detailed abstracts of each webpage
   - Aggregates insights from multiple pages into comprehensive prospect profiles
4. Advanced AI Icebreaker Generation:
   - Employs sophisticated prompting with system messages, examples, and formatting rules
   - Uses proven icebreaker templates that reference non-obvious website details
   - Generates personalized openers that imply deep manual research
5. Smart Data Processing:
   - Removes duplicate URLs and handles scraping errors gracefully
   - Implements token limits to control AI processing costs
   - Organizes the final output in a structured Google Sheets format
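Step 2 above (multi-page scraping) essentially comes down to collecting same-domain links from the homepage and resolving relative URLs before the follow-up requests. The sketch below is my own rough illustration of that idea, independent of the template's actual nodes:

```python
# Rough sketch: collect same-domain links from a homepage for follow-up scraping
import re
from urllib.parse import urljoin, urlparse

def internal_links(homepage_url, html, limit=15):
    base_host = urlparse(homepage_url).netloc
    links = set()
    for href in re.findall(r'href="([^"#]+)"', html):
        absolute = urljoin(homepage_url, href)   # resolves relative URLs like /about
        parsed = urlparse(absolute)
        # keep http(s) links on the same domain; drop mailto:, CDNs, social profiles, etc.
        if parsed.scheme in ("http", "https") and parsed.netloc == base_host:
            links.add(absolute.split("?")[0])
        if len(links) >= limit:
            break
    return sorted(links)
```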
Required Google Sheets Setup
Create a Google Sheet with these exact tab and column structures:

Search URLs Tab:
- URL - Contains Apollo.io search URLs for your target audiences

Leads Tab (Output):
- first_name - Contact's first name
- last_name - Contact's last name
- email - Contact's email address
- website_url - Company website URL
- headline - Job title/position
- location - Geographic location
- phone_number - Contact phone (if available)
- multiline_icebreaker - AI-generated personalized opener

Setup Instructions:
1. Create a Google Sheet with "Search URLs" and "Leads" tabs
2. Add your Apollo search URLs to the first tab (one per row)
3. Connect Google Sheets OAuth credentials in n8n
4. Update the Google Sheets document ID in all sheet nodes
5. The workflow reads from Search URLs and outputs to Leads automatically

Apollo Search URL Format:
Your search URLs should look like: https://app.apollo.io/#/people?personLocations[]=United%20States&personTitles[]=ceo&qKeywords=marketing%20agency&page=1

Business Use Cases
- **AI Automation Agencies** - Generate high-converting prospect outreach for service-based businesses
- **B2B Sales Teams** - Create personalized cold email campaigns that actually get responses
- **Marketing Agencies** - Offer premium personalization services to clients
- **Consultants** - Build authority through deeply researched prospect outreach
- **SaaS Companies** - Improve demo booking rates through personalized messaging
- **Professional Services** - Stand out from generic sales emails with custom insights

Revenue Potential
This system transforms cold email economics:
- **5-10x Higher Response Rates** than standard cold email approaches
- **$72K/month proven methodology** - the exact system used to scale a successful AI agency
- **Premium Positioning** - prospects assume you've done extensive manual research
- **Scalable Personalization** - process hundreds of prospects daily vs. manual research

Difficulty Level: Advanced
Estimated Build Time: 3-4 hours
Monthly Operating Cost: ~$150 (Apollo + Apify + OpenAI + Email platform APIs)

Watch My Complete Live Build
Want to see me build this entire deep personalization system from scratch? I walk through every component live, including the AI prompting strategies, website scraping logic, error handling, and the exact techniques that generate 5-10% reply rates.

🎥 See My Live Build Process: "I Deep-Personalized 1000+ Cold Emails Using THIS AI System (FREE TEMPLATE)"

This comprehensive tutorial shows the real development process, including advanced AI prompting, multi-page scraping architecture, and the proven icebreaker templates that have generated over $72K/month in agency revenue.

Set Up Steps
1. Apollo & Apify Integration:
   - Configure an Apify account with Apollo scraper access
   - Set up API credentials and test lead extraction
   - Define target audience parameters and lead qualification criteria
2. Google Sheets Database Setup:
   - Create the multi-sheet structure (Search URLs, Leads)
   - Configure proper column mappings for lead data
   - Set up Google Sheets API credentials and permissions
3. Website Scraping Infrastructure:
   - Configure HTTP Request nodes with proper redirect handling
   - Set up error handling for websites that can't be scraped
   - Implement intelligent batching with split-in-batches nodes
4. AI Content Processing:
   - Set up OpenAI API credentials with appropriate rate limits
   - Configure the dual-AI approach (page summarization + icebreaker generation)
   - Implement token limiting to control processing costs
5. Advanced Icebreaker Generation:
   - Configure sophisticated AI prompting with system messages
   - Set up example-based learning with input/output pairs
   - Implement formatting rules for natural-sounding personalization
6. Quality Control & Testing:
   - Test the complete workflow with small prospect batches
   - Validate AI output quality and personalization accuracy
   - Monitor response rates and optimize messaging templates

Advanced Optimizations
Scale the system with:
- **Industry-Specific Templates:** Customize icebreaker formats for different verticals
- **A/B Testing Framework:** Test different AI prompt variations and templates
- **CRM Integration:** Automatically add qualified responders to sales pipelines
- **Response Tracking:** Monitor which personalization elements drive the highest engagement
- **Multi-Touch Sequences:** Create follow-up campaigns based on initial response data

Important Considerations
- **AI Token Management:** The system includes intelligent token limiting to control OpenAI costs
- **Scraping Ethics:** Built-in delays and error handling prevent website overload
- **Data Quality:** Filtering logic ensures only high-quality prospects with accessible websites
- **Scalability:** Batch processing prevents IP blocking during high-volume scraping

Why This System Works
The key to 5-10% reply rates lies in making prospects believe you've done extensive manual research:
- Non-obvious details from deep website analysis
- Natural language patterns that avoid template detection
- Company name abbreviation (e.g., "Love AMS" vs "Love AMS Professional Services")
- Multiple page insights aggregated into compelling narratives

Check Out My Channel
For more advanced automation systems and proven business-building strategies that generate real revenue, explore my YouTube channel where I share the exact methodologies used to build successful automation agencies.
by Ajith joseph
🤖 Create a Telegram Bot with Mistral AI and Conversation Memory

A sophisticated Telegram bot that provides AI-powered responses with conversation memory. This template demonstrates how to integrate any AI API service with Telegram, making it easy to swap between different AI providers like OpenAI, Anthropic, Google AI, or any other API-based AI model.

🔧 How it works
The workflow creates an intelligent Telegram bot that:
- 💬 Maintains conversation history for each user
- 🧠 Provides contextual AI responses using any AI API service
- 📱 Handles different message types and commands
- 🔄 Manages chat sessions with clear functionality
- 🔌 Is easily adaptable to any AI provider (OpenAI, Anthropic, Google AI, etc.)

⚙️ Set up steps

📋 Prerequisites
- 🤖 Telegram Bot Token (from @BotFather)
- 🔑 AI API Key (from any AI service provider)
- 🚀 n8n instance with webhook capability

🛠️ Configuration Steps
1. 🤖 Create Telegram Bot
   - Message @BotFather on Telegram
   - Create a new bot with the /newbot command
   - Save the bot token for credentials setup
2. 🧠 Choose Your AI Provider
   - OpenAI: Get an API key from the OpenAI platform
   - Anthropic: Sign up for Claude API access
   - Google AI: Get a Gemini API key
   - NVIDIA: Access LLaMA models
   - Hugging Face: Use the inference API
   - Any other AI API service
3. 🔐 Set up Credentials in n8n
   - Add Telegram API credentials with your bot token
   - Add Bearer Auth/API Key credentials for your chosen AI service
   - Test both connections
4. 🚀 Deploy Workflow
   - Import the workflow JSON
   - Customize the AI API call (see the customization section)
   - Activate the workflow
   - Set the webhook URL in the Telegram bot settings

✨ Features

🚀 Core Functionality
- 📨 **Smart Message Routing**: Automatically categorizes incoming messages (commands, text, non-text)
- 🧠 **Conversation Memory**: Maintains chat history for each user (last 10 messages)
- 🤖 **AI-Powered Responses**: Integrates with any AI API service for intelligent replies
- ⚡ **Command Support**: Built-in /start and /clear commands

📱 Message Types Handled
- 💬 **Text Messages**: Processed through the AI model with context
- 🔧 **Commands**: Special handling for bot commands
- ❌ **Non-text Messages**: Polite error message for unsupported content

💾 Memory Management
- 👤 User-specific chat history storage
- 🔄 Automatic history trimming (keeps the last 10 messages)
- 🌐 Global state management across workflow executions

🤖 Bot Commands
- /start 🎯 - Welcome message with bot introduction
- /clear 🗑️ - Clears conversation history for a fresh start
- Regular text 💬 - Processed by the AI with conversation context

🔧 Technical Details

🏗️ Workflow Structure
1. 📡 Telegram Trigger - Receives all incoming messages
2. 🔀 Message Filtering - Routes messages based on type/content
3. 💾 History Management - Maintains conversation context
4. 🧠 AI Processing - Generates intelligent responses
5. 📤 Response Delivery - Sends formatted replies back to the user
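For orientation, the Telegram Trigger at the top of this structure receives standard Bot API updates. A simplified text-message update looks like the example below; the chat/user id in it is the kind of value the per-user history can be keyed on (see the memory settings for the actual identification logic).

```json
{
  "update_id": 123456789,
  "message": {
    "message_id": 42,
    "from": { "id": 987654321, "is_bot": false, "first_name": "Alex" },
    "chat": { "id": 987654321, "type": "private" },
    "date": 1716900000,
    "text": "What is the weather like on Mars?"
  }
}
```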
🤖 AI API Integration (Customizable)
Current example (NVIDIA):
- Model: mistralai/mistral-nemotron
- Temperature: 0.6 (balanced creativity)
- Max tokens: 4096
- Response limit: under 200 words

🔄 Easy to replace with any AI service.

OpenAI example:
```json
{ "model": "gpt-4", "messages": [...], "temperature": 0.7, "max_tokens": 1000 }
```

Anthropic Claude example:
```json
{ "model": "claude-3-sonnet-20240229", "messages": [...], "max_tokens": 1000 }
```

Google Gemini example:
```json
{ "contents": [...], "generationConfig": { "temperature": 0.7, "maxOutputTokens": 1000 } }
```

🛡️ Error Handling
- ❌ Non-text message detection and appropriate responses
- 🔧 API failure handling
- ⚠️ Invalid command processing

🎨 Customization Options

🤖 AI Provider Switching
To use a different AI service, modify the "NVIDIA LLaMA Chat Model" node:
- 📝 Change the URL in the HTTP Request node
- 🔧 Update the request body format in the "Prepare API Request" node
- 🔐 Update the authentication method if needed
- 📊 Adjust response parsing in the "Save AI Response to History" node

🧠 AI Behavior
- 📝 Modify the system prompt in the "Prepare API Request" node
- 🌡️ Adjust temperature and response parameters
- 📏 Change response length limits
- 🎯 Customize model-specific parameters

💾 Memory Settings
- 📊 Adjust history length (currently 10 messages)
- 👤 Modify the user identification logic
- 🗄️ Customize the data persistence approach

🎭 Bot Personality
- 🎉 Update the welcome message content
- ⚠️ Customize error messages and responses
- ➕ Add new command handlers

💡 Use Cases
- 🎧 **Customer Support**: Automated first-line support with context awareness
- 📚 **Educational Assistant**: Homework help and learning support
- 👥 **Personal AI Companion**: General conversation and assistance
- 💼 **Business Assistant**: FAQ handling and information retrieval
- 🔬 **AI API Testing**: Perfect template for testing different AI services
- 🚀 **Prototype Development**: Quick AI chatbot prototyping

📝 Notes
- 🌐 Requires an active n8n instance for webhook handling
- 💰 AI API usage may have rate limits and costs (varies by provider)
- 💾 Bot memory persists across workflow restarts
- 👥 Supports multiple concurrent users with separate histories
- 🔄 The template is provider-agnostic - easily switch between AI services
- 🛠️ Perfect starting point for any AI-powered Telegram bot project

🔧 Popular AI Services You Can Use

| Provider | Model Examples | API Endpoint Style |
|----------|---------------|-------------------|
| 🟢 OpenAI | GPT-4, GPT-3.5 | https://api.openai.com/v1/chat/completions |
| 🔵 Anthropic | Claude 3 Opus, Sonnet | https://api.anthropic.com/v1/messages |
| 🔴 Google | Gemini Pro, Gemini Flash | https://generativelanguage.googleapis.com/v1beta/models/ |
| 🟡 NVIDIA | LLaMA, Mistral | https://integrate.api.nvidia.com/v1/chat/completions |
| 🟠 Hugging Face | Various OSS models | https://api-inference.huggingface.co/models/ |
| 🟣 Cohere | Command, Generate | https://api.cohere.ai/v1/generate |

Simply replace the HTTP Request node configuration to switch providers!
by Thomas Janssen
Build a 100% local RAG with n8n, Ollama and Qdrant. This agent uses a semantic database (Qdrant) to answer questions about PDF files.

Tutorial
Click here to view the YouTube Tutorial

How it works
Build a chatbot that answers based on documents you provide it (Retrieval Augmented Generation). You can upload as many PDF files as you want to the Qdrant database. The chatbot will use its retrieval tool to fetch the relevant chunks and use them to answer questions.

Installation
- Install n8n + Ollama + Qdrant using the Self-hosted AI Starter Kit
- Make sure to install Llama 3.2 as the chat model and mxbai-embed-large as the embedding model

How to use it
1. First run the "Data Ingestion" part and upload as many PDF files as you want
2. Run the chatbot and start asking questions about the documents you uploaded
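If you prefer to pull the models manually instead of relying on the starter kit defaults, the usual Ollama commands are as follows (model names as listed in the Ollama library; adjust the tags if you use different variants):

```bash
ollama pull llama3.2            # chat model used by the agent
ollama pull mxbai-embed-large   # embedding model used for the Qdrant vectors
```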