by Yannick
🚀 How it works

This template turns a document (PDF, TXT, DOCX...) into an engaging LinkedIn post, ready to publish or to approve by email, all with the help of an AI specialized in LinkedIn copywriting. The key steps:

1. Submission form: the user uploads a file or pastes text.
2. Content type detection: a Switch node checks the file type (PDF, DOCX, TXT, or plain text); a routing sketch follows this section. Note: DOCX conversion requires a Make account (the workflow also works without DOCX).
3. Content extraction: depending on the format, the appropriate extraction module is used.
4. LinkedIn post generation: the AI turns the content into a LinkedIn post following an optimized copywriting methodology.
5. Email validation: an approval email is sent to the user, with the option to add an image.
6. Automatic publishing: once the user approves, the post is published on LinkedIn.

⚙️ Setup Steps

- Connect your accounts: Google Docs OAuth, LinkedIn OAuth, OpenAI (via gpt-4.1-mini or another model), and SMTP + IMAP for sending and reading emails.
- Configure the form fields in the Form Trigger node for your use case.
- Customize the AI prompt in the AI Agent node if you want to adapt the tone or methodology.
- Check the email addresses in the Send Email and Email Trigger (IMAP) nodes so that validation works.
- Test the workflow with different files to make sure every type is handled correctly (PDF, DOCX, TXT, etc.).

🧩 Typical use cases

- Create posts from meeting notes or reports.
- Turn an article or professional publication into LinkedIn content.
- Delegate the first draft of your social content to the AI.
- Bonus: a parallel branch watches a newsletter in your mailbox and suggests a relevant LinkedIn post (you can delete this branch; it runs independently).
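For illustration, here is a minimal Code-node sketch of the routing decision the Switch node makes, assuming the Form Trigger exposes the upload under the binary property `data` (the property name is an assumption; adjust it to your form field):

```typescript
// n8n Code node (sketch): classify the form submission the way the Switch does.
// The binary property name ('data') is an assumption; match your Form Trigger.
const item = $input.first();
const mime = item.binary?.data?.mimeType ?? '';

let branch = 'raw_text'; // fallback: the user pasted text instead of a file
if (mime === 'application/pdf') branch = 'pdf';
else if (mime === 'text/plain') branch = 'txt';
else if (mime.includes('wordprocessingml')) branch = 'docx'; // needs the Make converter

return [{ json: { ...item.json, branch } }];
```

In the actual template the Switch node does this routing declaratively; the sketch only shows which property each rule inspects.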
by Ranjan Dailata
Notice: Community nodes can only be installed on self-hosted instances of n8n.

Who this is for

The Search Engine Intelligence Extractor is a powerful n8n automation that leverages Bright Data's MCP-based AI agents to simulate human-like searches across Google, Bing, and Yandex, then distills clean, structured insights using Google Gemini. This workflow is tailored for:

- SEO analysts researching competitors or market trends
- Market researchers needing real-time search visibility
- Journalists & content writers gathering contextual insights
- AI developers creating intelligent assistants
- Digital marketers tracking brand mentions or news

What problem is this workflow solving?

Traditional scraping of search engines is often blocked, cluttered, or filled with irrelevant information, and manually cleaning that data for insight is time-consuming. This workflow solves the problem by:

- Simulating real user search behavior via a Bright Data MCP-based AI agent
- Performing multi-platform search (Google, Bing, Yandex) in one unified flow
- Extracting clean, human-readable results (stripping ads, navigation, etc.)
- Structuring the content using the Google Gemini LLM
- Automating delivery via webhook or saving to disk

What this workflow does

Input Fields node (a composition sketch follows this section):
- Accepts the search query
- Accepts an action, for example "Perform a google search"; replace "google" with "bing", "yandex", etc. for other search providers
- Accepts the webhook notification URL

Bright Data MCP agent execution:
- Triggers Bright Data's intelligent search agent
- Handles search navigation, result loading, and pagination

Human-readable data extractor:
- Cleans HTML; removes ads, footers, and irrelevant links
- Produces a readable narrative of the results

Final output handling:
- Saves the processed response to disk
- Sends the structured data to a webhook for real-time use

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below
- You need a Google Gemini API key. Visit Google AI Studio
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

- Set up n8n locally with MCP servers by following n8n-nodes-mcp
- Install the Bright Data MCP Server @brightdata/mcp on your local machine
- Sign up at Bright Data
- Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions
- In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy)
- In n8n, configure the MCP Client (STDIO) credentials to connect to the Bright Data MCP Server, setting the Bright Data API token in the Environments textbox as API_TOKEN=<your-token>

How to customize this workflow to your needs

- Add scheduled execution: add a Cron trigger to run this workflow on a set schedule (e.g., daily/weekly keyword tracking)
- Push results to custom destinations by connecting the output to Google Sheets (for analytics or dashboards), PostgreSQL or MySQL databases (for structured storage), Notion or Airtable (for content pipelines), or Slack or email (for alerting teams)
- Customize webhook notifications: update the webhook URL in the notification node to push processed results to external APIs, CRMs, or real-time dashboards
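To make the Input Fields contract concrete, here is a minimal Code-node sketch that composes the agent instruction; the field names (search_query, action, webhook_notification_url) are assumptions, so match them to your own Input Fields node:

```typescript
// n8n Code node (sketch): compose the agent instruction from the Input Fields node.
// The field names below are assumptions; align them with your own node.
const { search_query, action, webhook_notification_url } = $input.first().json;

// e.g. action = "Perform a google search"; swap "google" for "bing" or "yandex"
const agentPrompt = `${action} for: "${search_query}". ` +
  `Return the organic results as clean, human-readable text without ads or navigation.`;

return [{ json: { agentPrompt, webhook_notification_url } }];
```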
by Ranjan Dailata
Notice: Community nodes can only be installed on self-hosted instances of n8n.

Who this is for

This workflow template enables intelligent data extraction from ProductHunt using Bright Data's Model Context Protocol (MCP) and processes search results with Google Gemini. It is designed for individuals and teams who need automated, intelligent discovery and analysis of new tech products. It's especially valuable for:

- Startup analysts & VC researchers
- Growth hackers & marketers
- Recruiters & tech scouts
- Product managers & innovation teams
- AI & automation enthusiasts

What problem is this workflow solving?

Traditional product discovery on ProductHunt is constrained by limited descriptions and requires repeated manual validation through web searches. Manually extracting and enriching this data is slow, repetitive, and error-prone. This workflow solves the problem by:

- Extracting real-time ProductHunt data using Bright Data's MCP infrastructure to mimic real-user behavior and avoid blocks
- Performing contextual Google searches for a specific ProductHunt product to gather use cases, reviews, and related information
- Structuring results with the Google Gemini LLM to provide human-readable insights and reduce noise
- Delivering results seamlessly by saving output to disk, updating Google Sheets, and sending webhook alerts

What this workflow does

Input Field node: define the ProductHunt category with the search term(s) you want to target. This drives the extraction and search operations.

Agent Operation node: the agent performs two major tasks:
- Extract from ProductHunt: retrieves trending products from ProductHunt using Bright Data MCP
- Contextual Google search: for each product, the agent searches Google for deeper context, including reviews, competitor mentions, and real-world usage examples

LLM node (Google Gemini):
- Analyzes and summarizes extracted web content
- Removes noise (ads, menus, etc.)
- Structures content into bullet points, insights, or JSON objects

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below
- You need a Google Gemini API key. Visit Google AI Studio
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

- Set up n8n locally with MCP servers by following n8n-nodes-mcp
- Install the Bright Data MCP Server @brightdata/mcp on your local machine
- Sign up at Bright Data
- Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions
- In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy)
- In n8n, configure the MCP Client (STDIO) credentials to connect to the Bright Data MCP Server, setting the Bright Data API token in the Environments textbox as API_TOKEN=<your-token>

How to customize this workflow to your needs

This workflow is flexible and modular, allowing you to adapt it for various research, product discovery, or trend analysis use cases. Below are the key customization points and how to modify them.
- Define your target products or topics: change the input parameter to a specific ProductHunt category, tag, or keyword (e.g., "AI tools", "SaaS", "DevOps")
- Change output destinations:
  - **Save to disk**: change the file format (.json, .csv, .md) or directory path
  - **Google Sheet**: modify the sheet name and structure (columns like Product, Summary, Link); see the mapping sketch after this list
  - **Webhook notification**: point to a Slack/Discord/CRM/webhook URL with payload mapping
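A minimal Code-node sketch of the Google Sheets mapping mentioned above; the property names on the parsed LLM output (name, summary, url) are assumptions, so align them with your parser:

```typescript
// n8n Code node (sketch): map the Gemini output to Google Sheets columns.
// Assumes the LLM node returns an array of product objects; adjust the
// property names (name, summary, url) to match your actual parser output.
const products = $input.first().json.products ?? [];

return products.map((p) => ({
  json: {
    Product: p.name,
    Summary: p.summary,
    Link: p.url,
  },
}));
```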
by explorium
Explorium Event-Triggered Outreach

This n8n agent-based workflow automates outbound prospecting by monitoring Explorium event data (e.g., product launches, new office openings, new investments, and more), researching companies, identifying key contacts, and generating tailored sales emails, leveraging the Explorium MCP server.

Template Workflow Overview

Node 1: Webhook Trigger
- Purpose: listens for real-time product launch events pushed from Explorium's webhook system.
- How it works: Explorium sends HTTP POST requests containing event data. The webhook payload includes company name, business ID, domain, product name, and event type (a payload-handling sketch appears after this overview).
- Pay attention: a product launch is just one example; you can easily enroll in many more meaningful events. To learn about events and how to enroll in them, visit the events documentation.

Node 2: Company Research Agent
- Agent type: Tools Agent
- Purpose: enrich company data after an event occurs.
- How it works: uses Explorium MCP via the MCP Client tool to gather additional company data, and Anthropic Claude (Chat Model) to process and interpret company information for downstream personalization.

Node 3: Employee Data Retrieval
- Purpose: retrieve prospect-level data for targeting.
- How it works: uses an HTTP Request node to call Explorium's fetch_prospects endpoint. Filters prospects by:
  - Company business_id
  - Departments: Product, R&D, etc.
  - Seniority levels: owner, cxo, vp, director, senior, manager, partner, etc.
- Pay attention: follow the fetch prospects documentation for the full list of filters and best practices.
- Limits results to the top 5 relevant employees.
- Code nodes handle filtering logic, cleaning the API response, and formatting data for downstream agents.

Node 4: Conditional Branch - Prospect Data Check
- If node: checks whether prospect data was successfully retrieved.
- Logic: if prospects are found, write personalized emails per person; if not, fall back to a company-level general email.

Node 5A: Email Writer #1 (No Prospect Data)
- Agent type: Tools Agent
- Purpose: write a generic outbound email using only company-level research and event info.
- Powered by: Anthropic Chat Model

Node 5B: Loop Over Prospects → Email Writer #2 (Personalized)
- Agent type: Tools Agent
- Purpose: write a highly personalized email for each identified employee.
- How it works: loops through each individual prospect, passes company research + employee data to the LLM agent, and generates customized emails referencing the prospect's title & department, the product launch, and a role-relevant Explorium value proposition.

Node 6: Slack Notifications
- Purpose: posts completed emails to an internal Slack channel for review or testing before final deployment.
- Future state: can be swapped for an email sequencing platform in production.
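As a reference point for Node 1, here is a hedged Code-node sketch that normalizes the incoming event payload; the exact field names are assumptions, so confirm them against the events documentation for the event type you enroll in:

```typescript
// n8n Code node (sketch): normalize the incoming Explorium event payload.
// Field names below are illustrative; check the events documentation for
// the exact schema of the event type you enroll in.
const body = $input.first().json.body ?? $input.first().json;

const event = {
  businessId: body.business_id,
  companyName: body.company_name,
  domain: body.domain,
  productName: body.product_name,
  eventType: body.event_type, // e.g. "product_launch"
};

if (!event.businessId) {
  throw new Error('Missing business_id in webhook payload');
}

return [{ json: event }];
```

Failing fast here keeps a malformed payload from reaching the research and email-writing agents downstream.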
Setup Requirements

Explorium API access:
- MCP Client credentials for company enrichment and prospect fetching
- A registered webhook for event listening
- Get an Explorium API key

n8n configuration:
- Secure environment variables for API keys & the webhook secret
- Code nodes configured for JSON transformation, filtering & signature validation

Customization Options

Personalization logic:
- Update LLM prompt instructions to reflect ICP priorities
- Modify email templates based on role, department, or tenure logic
- Adjust fallback behavior when prospect data is unavailable

API request tuning:
- Adjust page_size for the number of prospects retrieved
- Fine-tune seniority and department filters to match evolving targeting

Future expansion:
- Swap Slack notifications for outbound email automation
- Integrate call task assignment directly into your CRM
- Introduce an engagement scoring feedback loop (opens, clicks, replies)

Troubleshooting Tips

- Validate webhook signature matching to prevent unauthorized requests (see the sketch after this list)
- Ensure the correct business_id is passed to the prospect fetching endpoint
- Confirm business enrichment returns sufficient data for the company research agent
- Review agent LLM responses for correct output structure and parsing consistency
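For the signature validation mentioned above, here is a hedged Code-node sketch using HMAC-SHA256; the header name and signing scheme are assumptions (check Explorium's webhook docs for the actual scheme), and the built-in crypto module must be allowed via NODE_FUNCTION_ALLOW_BUILTIN=crypto:

```typescript
// n8n Code node (sketch): HMAC signature check for incoming webhooks.
// The header name (x-explorium-signature) and HMAC-SHA256 scheme are
// assumptions; confirm the real scheme in Explorium's webhook docs.
const crypto = require('crypto');

const secret = $env.WEBHOOK_SECRET; // set as an n8n environment variable
const received = $input.first().json.headers['x-explorium-signature'];
const payload = JSON.stringify($input.first().json.body);

const expected = crypto.createHmac('sha256', secret).update(payload).digest('hex');

if (received !== expected) {
  throw new Error('Webhook signature mismatch, rejecting request');
}

return $input.all();
```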
by Ibrahim Malick
⚠️ This template uses only official n8n nodes. No community nodes required.

🧑💼 Who is this for?

This workflow is designed for:
- Legal tech founders
- Marketing freelancers or consultants
- Agencies supporting lawyers and small law firms
- Anyone doing outbound outreach in the legal niche

❓ What problem is this solving?

LinkedIn is a goldmine for targeting legal professionals — but scraping and personalizing outreach is tedious and expensive. Most tools either:
- Require paid LinkedIn Sales Navigator
- Can't personalize at scale
- Violate LinkedIn's TOS

This workflow solves that by using free Google Search, OpenRouter AI, and GPT-4o to find, enrich, and message up to 1,000 solo lawyers per day — without using browser automation or scrapers.

⚙️ What this workflow does

- Uses Google Programmable Search to find solo lawyers and small firm founders on LinkedIn
- Parses each profile's name, title, profile URL, and snippet
- Saves raw lead data to Google Sheets
- Uses OpenRouter Sonar Pro to enrich each profile with external content
- Generates a personalized, one-line message using GPT-4o
- Appends the final message to Google Sheets for outreach

🛠️ Setup

Estimated time: 15–20 minutes

✅ Google Programmable Search (a request sketch follows this section)
- Enable the Custom Search API on Google Cloud
- Create a programmable search engine set to search the full web
- Copy your API key and CX ID

✅ Google Sheets
- Create a sheet with columns: Name, Title, Profile URL, Outreach Message
- Share the sheet with your OAuth-connected Google account

✅ OpenRouter
- Sign up at openrouter.ai
- Fund with at least $5 and generate your API key
- Use the model perplexity/sonar-pro for real-time research

✅ GPT-4o (optional)
- You can use your OpenAI key or route GPT-4o via OpenRouter

All setup-specific values are marked clearly in sticky notes and placeholders.

🛠️ How to customize this workflow to your needs

- Change the Google search query to match your industry (e.g., "founder" AND "therapist" site:linkedin.com/in)
- Modify the AI prompt to match your tone (formal, casual, humorous)
- Connect the final output to your CRM (HubSpot, Airtable, etc.)
- Add a second outreach message variant to A/B test performance

📌 Sticky Notes & Annotations

- All nodes are clearly renamed for understandability (e.g., Find Lawyer Profiles, Parse LinkedIn Search Results)
- Color-coded sticky notes explain setup instructions, required credentials, and the use case

🗂 Category: AI, Sales, Marketing
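For reference, the Find Lawyer Profiles step boils down to a Custom Search JSON API request like the sketch below; the environment variable names are placeholders for your own API key and CX ID:

```typescript
// Sketch: query the Google Custom Search JSON API for LinkedIn profiles.
// GOOGLE_API_KEY / GOOGLE_CX are assumed env vars; set them to your own values.
const params = new URLSearchParams({
  key: process.env.GOOGLE_API_KEY ?? '',
  cx: process.env.GOOGLE_CX ?? '',
  q: '"solo lawyer" site:linkedin.com/in',
  num: '10', // max results per page for this API
});

const res = await fetch(`https://www.googleapis.com/customsearch/v1?${params}`);
const data = await res.json();

// Each result carries the fields the workflow parses: title, link, snippet.
for (const item of data.items ?? []) {
  console.log(item.title, item.link, item.snippet);
}
```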
by Jay Hartley
What this template does

This workflow collects order data as it is produced, then sends a summary email of all orders at the end of every day, formatted as a table. It receives new orders via webhook and stores them in Airtable. At 7 PM every day, it sends a summary email with the day's orders in an HTML table (a rendering sketch follows the test steps below).

Setup

Instructions Video

1. Create a new table in Airtable and give it a field time with type date, orderID with type number, and orderPrice also with type number.
2. Create a new access token if you haven't already at https://airtable.com/create/tokens/new. Make sure to give the token the scopes data.records:read, data.records:write, and schema.bases:read, plus access to whichever table you choose to store the orders. A pop-up window appears with the token. Use this token (Create New Credential > Access Token) for Airtable in the Store Order and Airtable Get Today's Orders nodes.
3. Create access credentials for your Gmail as described here: https://developers.google.com/workspace/guides/create-credentials. Use the credentials from your client_secret.json in the Send to Gmail node.
4. In the Store Order node, change Base and Table to the base and table in your Airtable account you wish to use to store orders. Make sure to use these same values in the Airtable Get Today's Orders node.
5. Every time an order is created in your system, send a POST request to the Webhook from your order software. Each request must contain a single order with fields 'orderID' and 'orderPrice' (or edit Set Order Fields to select which incoming fields you wish to save).
6. Change the schedule time for sending the email from Everyday at 7PM to whichever time you choose.

Test

1. Activate the workflow.
2. From the Webhook node, copy the Production URL.
3. Send the following curl request to the URL given to you:

```
curl -X POST -H "Content-Type: application/json" -d '{"orderID": 12345, "orderPrice": 99.99}' YOUR_URL_HERE
```

It should say "Node executed successfully". Now check your Airtable and confirm the order was stored in the right place.
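A minimal Code-node sketch of how the daily summary table can be rendered, assuming the Airtable Get Today's Orders node returns items with time, orderID, and orderPrice fields:

```typescript
// n8n Code node (sketch): render the day's orders as an HTML table for the email.
// Assumes upstream items carry time, orderID and orderPrice fields.
const orders = $input.all().map((item) => item.json);

const rows = orders
  .map((o) => `<tr><td>${o.time}</td><td>${o.orderID}</td><td>${o.orderPrice}</td></tr>`)
  .join('');

const total = orders.reduce((sum, o) => sum + Number(o.orderPrice || 0), 0);

const html = `
  <table border="1" cellpadding="4">
    <tr><th>Time</th><th>Order ID</th><th>Price</th></tr>
    ${rows}
    <tr><td colspan="2"><b>Total</b></td><td><b>${total.toFixed(2)}</b></td></tr>
  </table>`;

return [{ json: { html } }];
```

The resulting html field can be referenced directly in the Send to Gmail node's message body.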
by Joachim Hummel
This n8n workflow automates posting Amazon affiliate products to Mastodon — complete with image upload, description, and a shortened tracking URL using Shlink.

🔧 How it works

1. Input source: the workflow starts by reading from a connected Google Sheet that contains:
   - SHlink (shortlink)
   - Amazon link
   - Description (optional)
   - PicURL
   - Send NO/YES (a flag used to check whether the row was already posted)
2. Image upload: it fetches the product image via HTTP and uploads it directly to a Mastodon instance via the /media API endpoint.
3. URL shortening (Shlink): the original Amazon URL is shortened using your self-hosted or cloud-hosted Shlink instance to enable click tracking and better presentation (a request sketch follows this section).
4. Text generation: a two-line promotional text is automatically generated by a language model (LLM), based on the product description.
5. Posting to Mastodon: the post is then published on Mastodon with the image, the generated text, and the shortened Shlink URL.
6. Row update: once published, the Send column in the Google Sheet is set to "YES" to prevent duplicates.

Requirements

✅ Shlink – required for shortening and tracking Amazon URLs
✅ Google Sheet – used as the product queue and post log
✅ Google Sheet example: https://link.unixweb.home64.de/w7VqY
✅ Mastodon account – OAuth2 credentials with write scope
✅ Product image URL – must be valid and accessible
✅ n8n credentials – set up for Google Sheets, Mastodon, and optionally OpenRouter or another LLM provider

This workflow is ideal for content creators, affiliate marketers, and automation fans who want to save time and optimize reach across the Fediverse.

#affiliate #amazon #mastodon #advertisement
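As a sketch of the URL shortening step, this is roughly the REST call the workflow makes to Shlink; the /rest/v3/short-urls path matches recent Shlink releases, so verify it against your instance's API version:

```typescript
// Sketch: shorten an Amazon URL via the Shlink REST API.
// SHLINK_BASE_URL / SHLINK_API_KEY are assumed env vars; the /rest/v3/short-urls
// path matches recent Shlink versions, so check your instance's API docs.
const res = await fetch(`${process.env.SHLINK_BASE_URL}/rest/v3/short-urls`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Api-Key': process.env.SHLINK_API_KEY ?? '',
  },
  body: JSON.stringify({ longUrl: 'https://www.amazon.com/dp/EXAMPLE?tag=your-tag' }),
});

const data = await res.json();
console.log(data.shortUrl); // use this in the Mastodon post
```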
by DigiMetaLab
A smart personal assistant that can reason, search, calculate, and remember — powered by Google Gemini and ready in one click. Most AI agents only respond; this one thinks before replying, pulls in real-time facts, does the math, and even remembers the last 5 things you said.

🔧 How it works

This template builds a conversational agent using the Google Gemini API. It uses multiple tools:
- 🧠 Think – to reason step-by-step
- 🔍 SerpAPI – to search live data on Google
- ➗ Calculator – to solve math problems
- 💾 Memory – to remember short-term chat history

You can embed this agent into a chatbot or web app, or automate any customer support, research, or productivity workflow.

🧠 Your agent will:
- Understand what you're asking
- Think logically using the Think tool
- Search facts in real time using SerpAPI
- Calculate numbers using a math engine
- Recall past context using a memory buffer
- And respond clearly — just like a real assistant

🧑💼 Who is this template for?

This template is ideal for:
- Creators & developers building AI agents
- Teams needing a Gemini-powered assistant
- Beginners exploring LangChain + n8n
- Anyone curious about combining LLMs + tools + memory

🚀 How to set up

1. Plug in your Google Gemini API key
2. Add your SerpAPI key
3. Run the workflow and start chatting!

Everything is pre-wired for you — just import and go.

📬 Use cases

You can connect this agent to:
- Telegram bots 🤖
- WhatsApp via Twilio 📱
- Slack, Discord, or Gmail 💬
- Or just trigger it inside n8n manually 🔁

👉 Check out my other templates: https://n8n.io/creators/digimetalab
by Alexander K.
Transform your creative sparks into professional Instagram Reel scripts instantly! This AI-powered workflow takes your raw ideas (text or voice messages) via Telegram and generates complete, viral-ready Reel scenarios with hooks, scripts, captions, and visual concepts.

Who is this template for?

This template is perfect for:
- Content creators looking to streamline their Reel production process
- Social media managers who need to generate multiple Reel concepts quickly
- Marketing professionals seeking data-driven, psychology-based content strategies
- Influencers and entrepreneurs who want to maintain consistent, engaging content
- Small business owners looking to create viral marketing content without hiring expensive copywriters
- Anyone who struggles with writer's block or wants to improve their Instagram engagement

What this template does

This comprehensive workflow provides a complete Reel creation assistant that:

🎯 Accepts multiple input types:
- Text messages with your Reel ideas
- Voice notes that get automatically transcribed
- Processes ideas in real time through Telegram

🧠 AI-powered content generation:
- Creates 3 attention-grabbing hook variants designed to stop the scroll
- Generates a complete 30–60 second script with Hook, Subtitle, Body, and Call-to-Action
- Writes engaging captions that complement (not repeat) your video content
- Provides specific visual concepts with cinematic direction for filming

📊 Smart features:
- A memory system that remembers your conversation context for personalized suggestions
- Optional Google Sheets integration to automatically log and organize all your Reel ideas
- Error handling for a seamless user experience
- Instant delivery of results back to your Telegram chat

🎨 Professional quality output:
- Scripts based on proven marketing psychology from industry legends
- Hooks designed using viral content strategies
- Visual concepts that are specific and actionable (not generic "film yourself" advice)
- Captions optimized for engagement and shareability

Sample Results

Input (idea): How I Saved 10 Hours a Week with Blog Automation?

Output:

💡 Hook (variants):
- "Blogging doesn't have to be a time-suck. Here's my secret…"
- "Is blogging eating up your spare time? Let's fix that!"
- "Unlock 10 hours a week AND keep your blog thriving!"

🎬 Script:
- Hook: "Blogging doesn't have to be a time-suck. Here's my secret…"
- Subtitle: "Maximize your time with blog automation hacks!"
- Body: "Picture this: writing, editing, posting, and promoting your blog without breaking a sweat. I was buried under endless tasks until I discovered blog automation. Scheduling posts, auto-publishing, automating social shares—it's a game-changer. 10 hours a week, back in my pocket! More time for creativity or even a break. Imagine what automation could do for your content game."
- CTA: "Which blog task do you wish was automated? Drop a comment!"

📝 Reel caption: Blog automation isn't just convenience—it's freedom. What will you create with your extra time?

📸 Visual idea: Open with a whirlwind of papers and sticky notes symbolizing chaos. Transition to a person seamlessly typing on a laptop, where blog posts are auto-scheduled. Quick cuts show blog shares and responses happening automatically. Conclude with a serene scene: the person outdoors, notebook in hand, jotting ideas peacefully on a sunny day.
Setup Instructions

Prerequisites:
- Telegram account
- OpenAI API account with GPT-4 access
- Google account (optional, for logging ideas)

Step 1: Create your Telegram bot
1. Open Telegram and search for @BotFather
2. Send /newbot and follow the instructions to create your bot
3. Save the Bot Token you receive - you'll need this for n8n
4. Send /setprivacy to @BotFather, select your bot, and choose "Disable" to allow the bot to read all messages

Step 2: Get your OpenAI API key
1. Visit OpenAI's API platform
2. Create an account or log in
3. Navigate to the API Keys section
4. Create a new API key and save it securely
5. Ensure you have access to GPT-4 models (required for optimal results)

Step 3: Configure the workflow
1. Import this template into your n8n instance
2. Set up Telegram credentials: add your Bot Token to all Telegram nodes and test the connection
3. Configure OpenAI credentials: add your API key to the "OpenAI Chat Model" and "Transcribes audio" nodes, and verify GPT-4o model access
4. Optional Google Sheets setup: create a new Google Sheet with columns Status, Date, Description, Script; connect your Google account to the "Google Sheets" node; select your spreadsheet and sheet

Step 4: Activate and test
1. Click "Activate" in the top-right corner of your workflow
2. Open Telegram and find your bot
3. Send a test message like "Create a Reel about morning routines"
4. Verify you receive a complete Reel scenario response

Step 5: Start creating!
- Send text ideas directly to your bot
- Record voice notes with your concepts
- Receive professional Reel scenarios within seconds
- Use the optional Google Sheets integration to build your content library

Pro tips:
- Be specific with your ideas for better results
- The AI remembers your conversation context, so you can refine ideas iteratively
- Voice messages work great for capturing spontaneous ideas on the go
- Review the generated visual concepts - they're designed to be immediately actionable

Troubleshooting:
- Ensure your OpenAI account has sufficient credits
- Verify your Telegram bot privacy settings allow message reading
- Check that all credentials are properly configured and tested
- For Google Sheets issues, confirm the sheet structure matches the expected columns

This template transforms the tedious process of content creation into an instant, AI-powered system that delivers professional-quality Reel scenarios whenever inspiration strikes!
by Samuel Kimutai
How it works
- Automatically generates trending LinkedIn content topics using AI
- Researches current industry angles and hooks
- Writes posts in your authentic voice using OpenAI
- Creates professional images with DALL-E
- Posts everything on schedule without manual intervention

Set up steps
- Connect the OpenAI API for content generation and image creation
- Link the LinkedIn API for automated posting
- Configure scheduling triggers (daily/weekly posting)
- Customize prompts to match your writing style and industry
- Set up content approval workflows (optional)

Results you can expect
- 400% increase in profile views within 3 weeks
- Generate 120+ posts per month vs. 12 manual posts
- Free up 15+ hours weekly for revenue-generating activities
- A consistent posting schedule that builds audience engagement
- Professional content that converts followers to clients

Time to set up: 30–45 minutes
Technical level: beginner to intermediate
APIs required: OpenAI, LinkedIn API
Cost: OpenAI usage fees only (approximately $5–15/month)

This workflow transforms LinkedIn content creation from a time-consuming daily task into a fully automated system that works while you sleep. Perfect for entrepreneurs, marketers, and content creators who want a consistent LinkedIn presence without the manual effort.
by Paul
AI Database Assistant with Smart Queries & PostgreSQL Integration

Description:

🚀 Transform your database into an intelligent AI assistant

This workflow creates a smart database assistant that safely handles natural language queries without crashing your system. It features a dual-agent architecture with built-in query limits and PostgreSQL optimization – perfect for commercial applications!

✅ Ideal for:
- SaaS developers building database search features 🔍
- Database administrators providing safe AI access 🛡️
- Business teams needing user-friendly data queries 📊
- Anyone wanting ChatGPT-like database interaction 🤖

🔧 How It Works

1️⃣ User asks a question – "Show me top 10 popular products"
2️⃣ Main AI Agent – interprets the request and enforces safety limits
3️⃣ SQL Sub-Agent – generates precise PostgreSQL queries
4️⃣ Database executes – returns formatted, limited results safely

⚡ Setup Instructions

1️⃣ Prepare your database
- Ensure PostgreSQL is accessible from n8n
- Note your table structure and column names
- Set up database connection credentials

2️⃣ Customize the templates
- Replace [YOUR_TABLE_NAME] with your actual table name
- Update [YOUR_FIELDS] with your column names
- Modify the examples to match your use case
- **Important**: keep all LIMIT clauses intact! (see the enforcement sketch after this section)

3️⃣ Configure the agents
- Copy the Main Agent system message to your primary AI node
- Copy the Sub-Agent system message to your SQL generator node
- Connect the sub-workflow between both agents

4️⃣ Test & deploy
- Test with sample queries like "Show me 5 recent items"
- Verify query limits work (max 50 results)
- Deploy and monitor performance

🎯 Why use this workflow?
✔️ System protection – built-in limits prevent crashes from large queries
✔️ Natural language – users ask questions in plain English
✔️ Commercial ready – generic templates work with any database
✔️ Dual-agent safety – smart interpretation + precise SQL generation
✔️ PostgreSQL optimized – handles complex schemas and data types

🚨 Critical Features
- **Query limits**: default 10, maximum 50 results (can be modified)
- **Error prevention**: no unlimited data retrieval
- **Smart routing**: natural language → safe SQL → formatted results
- **Customizable**: works with any PostgreSQL database schema

🔗 Start building your AI database assistant today – safe, smart, and scalable!
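As a safety net for the LIMIT rule above, a Code node between the SQL Sub-Agent and the PostgreSQL node can clamp whatever the agent generates. This is a minimal sketch, assuming the sub-agent returns its SQL in a query field; the 10/50 values mirror the template's defaults:

```typescript
// n8n Code node (sketch): enforce the LIMIT cap on agent-generated SQL
// before it reaches the PostgreSQL node. Adjust the constants to your policy.
const DEFAULT_LIMIT = 10;
const MAX_LIMIT = 50;

let sql = $input.first().json.query?.trim() ?? '';

const match = sql.match(/\blimit\s+(\d+)/i);
if (!match) {
  // No LIMIT at all: append the default so nothing unbounded runs.
  sql = sql.replace(/;?\s*$/, ` LIMIT ${DEFAULT_LIMIT};`);
} else if (Number(match[1]) > MAX_LIMIT) {
  // LIMIT too high: clamp it to the maximum.
  sql = sql.replace(/\blimit\s+\d+/i, `LIMIT ${MAX_LIMIT}`);
}

return [{ json: { query: sql } }];
```

Prompt instructions alone can be ignored by the model; a hard clamp like this guarantees the limit holds even when the sub-agent misbehaves.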
by Malik Hashir
Overview

The n8n Telegram Gmail Assistant is an intelligent workflow that lets you search and retrieve specific Gmail emails simply by messaging a Telegram bot. Powered by advanced language models, it turns plain-language requests into precise Gmail searches, delivering results directly to your Telegram chat. This no-code automation is perfect for users who want instant, conversational access to their inbox—no Gmail tab required.

Key Features

- **Conversational email search**: just message the Telegram bot with requests like "Get me all emails from Amazon" or "Show unread emails after 6 June 2025." The assistant understands sender names, keywords, and date filters—even if you only provide part of the information.
- **AI-powered query parsing**: uses a language model (LLM) to intelligently extract the sender, keywords, and date range from your message, then builds an accurate Gmail search query (see the sketch after this section).
- **Flexible filtering**: supports sender, keywords, 'after' and 'before' dates, or any combination. Handles both specific and broad queries.
- **Instant Telegram delivery**: each matching email is formatted with date, sender, subject, and a snippet, and sent as a separate Telegram message for easy reading.
- **Customizable & extendable**: swap the AI model (Google Gemini or OpenAI), adjust output formatting, and set email limits or read status as needed.

How It Works

1. User sends a Telegram message: for example, "Get unread emails from Amazon about invoices after 1 June 2025."
2. AI interprets the request: the workflow's LLM agent extracts the sender, keywords, and date filters, converting them into a Gmail search query using Gmail's syntax (e.g., from:amazon AND (invoice OR invoices) AND after:2025/06/01).
3. Gmail search: the workflow fetches all matching emails from your connected Gmail account.
4. Message formatting: each email is summarized into a concise, emoji-rich Telegram message (date, sender, subject, snippet).
5. Telegram delivery: results are sent to your Telegram chat, one message per email.

Setup Instructions

1. Create a Telegram bot: use @BotFather on Telegram to create a bot and obtain the API token.
2. Connect Telegram to n8n: add your bot's API token as a credential in n8n.
3. Connect your Gmail account: authorize your Gmail account in n8n, set email limits, and choose read/unread status preferences.
4. Configure the AI model: use your own Google Gemini or OpenAI API key, or select a preferred LLM node in the workflow.
5. Deploy the workflow: activate the workflow and start messaging your Telegram bot to retrieve emails instantly.

Value Proposition

- **Save time**: no need to open Gmail or remember search operators—just ask in plain language.
- **Stay organized**: instantly filter and retrieve important emails, even on the go.
- **User-friendly**: no coding required, with clear setup steps and customizable options.
- **Cost-effective**: available simply with an n8n subscription—no extra costs or hidden fees.

Enjoy the workflow, free forever within your n8n plan.
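To make the query-building step concrete, here is a minimal sketch of how the parsed fields can be assembled into Gmail search syntax; the parsed-field names (sender, keywords, after, before) are assumptions, so align them with your agent's structured output:

```typescript
// Sketch: build a Gmail search query from the fields the LLM agent extracts.
// The parsed-field names are assumptions; align them with your agent's output.
function buildGmailQuery(parsed) {
  const parts = [];
  if (parsed.sender) parts.push(`from:${parsed.sender}`);
  if (parsed.keywords?.length) parts.push(`(${parsed.keywords.join(' OR ')})`);
  if (parsed.after) parts.push(`after:${parsed.after}`);   // Gmail expects YYYY/MM/DD
  if (parsed.before) parts.push(`before:${parsed.before}`);
  return parts.join(' AND ');
}

// The example from the walkthrough above:
console.log(buildGmailQuery({
  sender: 'amazon',
  keywords: ['invoice', 'invoices'],
  after: '2025/06/01',
}));
// -> from:amazon AND (invoice OR invoices) AND after:2025/06/01
```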