by John Alejandro Silva
🤖📨 Telegram AI Assistant with Multi-File Media Group Handling, Smart File Processing & PostgreSQL Integration

> AI-powered Telegram bot for text, voice, video, documents & media — with database-driven grouping and Telegram-safe formatting.

📋 Description

This n8n template creates a next-generation Telegram AI assistant 🧠💬 capable of handling text messages, media files, and documents with advanced processing, PostgreSQL integration, and AI-powered responses. It is designed to solve Telegram's media group challenge 📦 — when multiple files are sent together, they are stored, processed, and combined into one coherent AI-generated reply.

✨ Key Features

- 📂 Multi-file media group management backed by three PostgreSQL tables: media_group, media_queue, chat_histories
- 📑 Document parsing for CSV, HTML, ICS, JSON, ODS, PDF (with AI fallback), RTF, TXT, XML, and spreadsheets
- 🎤 Voice & video transcription for AI analysis
- 🖼️ Image, audio, and video description for richer AI context
- 🛡️ Telegram-safe MarkdownV2 formatting with auto-splitting for messages over 4096 characters
- ⚠️ Error fallback for unsupported file types

💡 Acknowledgment

A huge thank you to Ezema Gingsley Chibuzo 🙌 for the inspiration of the first version of this workflow: Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG. Your pioneering work laid the foundation for this improved, database-powered multi-modal assistant 🚀

🏷 Tags

telegram ai-assistant postgresql multi-file media-group file-processing voice-transcription document-parser pdf-extraction markdown-formatting n8n-template

💼 Use Case

Use this template if you need an AI-powered Telegram bot that can:

- 📦 Handle multiple files sent in a single message (albums, multiple PDFs, etc.)
- 🧾 Extract & analyze content from many file formats
- 🎙️ Transcribe voice and video messages
- 🗂️ Maintain chat memory for contextual AI answers
- 🛡️ Avoid Telegram formatting errors and length-limit issues

This workflow automates the full chain: Receive → Process → AI Analysis → Telegram-safe Reply.

💬 Example User Interactions

- 📄 **Multiple PDFs with a caption** → AI extracts and summarizes all PDFs in one combined reply.
- 🎤 **Voice message** → AI transcribes and replies with a contextual answer.
- 📊 **CSV or spreadsheet file** → AI parses and summarizes the data.
- 🖼️ **Multiple images** → AI describes each image and replies in a single message.

🔑 Required Credentials

- **Telegram Bot API** (bot token)
- **PostgreSQL** (connection credentials)
- **AI Provider API** (OpenAI, Google Gemini, or compatible LLM)

⚙️ Setup Instructions

1. 🗄️ Create the PostgreSQL tables (SQL in the gray section): media_group, media_queue, chat_histories
2. 🔌 Configure the Telegram Trigger with your bot token.
3. 🤖 Connect your AI provider credentials.
4. 🗂️ Set up PostgreSQL credentials in the database nodes.
5. ▶️ Deploy the workflow in n8n.
6. 🎯 Start sending messages and files to your bot.

📌 Extra Notes

- ✅ The green section ensures only one trigger fires per media group.
- 📌 The yellow section guarantees captions and files are stored in the correct sequence.
- ✨ The purple section formats AI output to be Telegram-safe and splits it if needed (see the sketch below).
- 🧠 The AI prompt is not fixed, allowing full customization.

💡 Need Assistance?

If you'd like help customizing or extending this workflow, feel free to reach out:

📧 Email: johnsilva11031@gmail.com
🔗 LinkedIn: John Alejandro Silva Rodríguez
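For reference, here is a minimal sketch of the kind of MarkdownV2 escaping and message splitting the purple section performs, written as an n8n Code node. The reserved-character list comes from Telegram's MarkdownV2 spec and the 4096-character cap is Telegram's hard limit; the `aiReply` field name is an assumption for illustration.

```javascript
// Minimal sketch: escape Telegram MarkdownV2 reserved characters, then
// split the reply into chunks under Telegram's 4096-character limit.

const RESERVED = /[_*[\]()~`>#+\-=|{}.!]/g;

function escapeMarkdownV2(text) {
  // Prefix every reserved character with a backslash.
  return text.replace(RESERVED, (ch) => '\\' + ch);
}

function splitMessage(text, limit = 4096) {
  const chunks = [];
  let remaining = text;
  while (remaining.length > limit) {
    // Prefer to break on a newline so formatting entities are less likely to split.
    let cut = remaining.lastIndexOf('\n', limit);
    if (cut <= 0) cut = limit;
    chunks.push(remaining.slice(0, cut));
    remaining = remaining.slice(cut);
  }
  chunks.push(remaining);
  return chunks;
}

// Output: one n8n item per Telegram message to send.
const reply = escapeMarkdownV2($json.aiReply ?? '');
return splitMessage(reply).map((text) => ({ json: { text } }));
```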
by Robert Breen
This n8n workflow finds experts on any topic, scrapes their websites, and pulls out contact emails automatically.

Core services used: SerpAPI (Google search) · Apify (website crawler) · OpenAI (GPT-4o email extraction).

🛠️ Step-by-Step Setup & Execution

1️⃣ Run Workflow (Manual Trigger)

| Node | Type | Purpose |
|------|------|---------|
| Run Workflow | Manual Trigger | Start the workflow on demand while you test. |

2️⃣ Set Your Topic

| Node | Type | How to configure |
|------|------|------------------|
| Set Topic | Set | Add a string field Topic – e.g. "n8n". This keyword drives every subsequent step. |

3️⃣ Search Google (Results 1-10)

| Node | Type | API Credential |
|------|------|----------------|
| Search Google (top 10) | SerpAPI | Create a SerpAPI credential: 1. Sign up → copy API key. 2. n8n → Credentials → New → SerpAPI → paste. 3. Select the credential in this node. |

| Key Params | Value |
|------|------|
| q | ={{ $json.Topic }} Expert |
| location | Region code (e.g. 585069efee19ad271e9c9b36) |
| additionalFields.start | "10" (Google position 1-10) |

4️⃣ Search Google (Results 11-20)

| Node | Type | Notes |
|------|------|-------|
| Search Google (11-20) | SerpAPI (same credential) | Remove start or set it to 20+ to fetch the next page. |

5️⃣ Extract URL Lists

| Node | Type | Script Purpose |
|------|------|----------------|
| Extract Url & Extract Url 2 | Code | Loop over data.organic_results → output { title, link, displayed_link } for each result (see the sketch at the end of this section). |

6️⃣ Combine Both Result Sets

| Node | Type | Details |
|------|------|---------|
| Append Results | Merge (combineAll) | Merges the arrays from steps 3 & 4 into a single list for processing. |

7️⃣ Loop Over Every URL

| Node | Type | Configuration |
|------|------|---------------|
| Loop Over Items1 | Split In Batches | Default batch = 1 (process one page at a time). onError = continueRegularOutput keeps the loop alive on failures. |

8️⃣ Scrape Webpage Content (Apify)

| Node | Type | API Credential |
|------|------|----------------|
| Scrape URL with apify | HTTP Request | Create an Apify credential: 1. Sign up at https://console.apify.com 2. Account → API tokens → copy. 3. n8n → Credentials → New → HTTP Query Auth → set query param token=YOUR_TOKEN. |

| Request Details | |
|------|------|
| Method | POST |
| URL | https://api.apify.com/v2/acts/6sigmag~fast-website-content-crawler/run-sync-get-dataset-items |
| JSON Body | |

9️⃣ Extract Email with OpenAI

| Node | Type | API Credential |
|------|------|----------------|
| Extract Email from webpage | LangChain Agent | Create an OpenAI credential: 1. Generate a key at https://platform.openai.com/account/api-keys 2. n8n → Credentials → New → OpenAI API → paste key. |

| Prompt (system) | |
|------|------|
| Output Parser | Structured Output Parser2 expects → { "email": "address OR null" } |

🔟 Loop Continues & Final Data

The extracted result returns to Loop Over Items1 until every URL is processed. Typical final item JSON:

{ "title": "How to Build n8n Workflows", "link": "https://example.com", "email": "info@example.com" }

💡 Optional Enhancements

| Idea | How |
|------|-----|
| Save Leads | Add a Google Sheets or Airtable node after the loop. |
| Validate Emails | Chain a ZeroBounce / Hunter.io verification API before saving. |
| Parallel Crawling | Increase the SplitInBatches size (watch Apify rate limits). |

🙋‍♂️ Need More Help?

Robert Breen – Automation Consultant & n8n Expert
📧 robert.j.breen@gmail.com
🔗 https://www.linkedin.com/in/robert-breen-29429625/
🌐 https://ynteractive.com
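As a reference for step 5, here is a hedged sketch of what the Extract Url Code node's script might look like. Depending on how the SerpAPI node nests its output, the results array may sit at `$json.organic_results` or `$json.data.organic_results`, so the sketch checks both.

```javascript
// Sketch of the "Extract Url" Code node: flatten SerpAPI organic results
// into one n8n item per search hit, keeping only the fields used downstream.
const results = $json.organic_results ?? $json.data?.organic_results ?? [];

return results.map((r) => ({
  json: {
    title: r.title,
    link: r.link,
    displayed_link: r.displayed_link,
  },
}));
```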
by Avkash Kakdiya
How it works

This workflow enhances contact intelligence by retrieving new or updated contact data, enriching it using AI and external APIs, and then updating your CRM or contact management system with intelligent insights. It automates the process of gathering, enriching, and organizing contact information to improve targeting, personalization, and engagement.

Step-by-step

1. Trigger & Input
- The workflow is triggered by a scheduler or webhook event.
- It reads a new (or updated) contact entry from your source, such as a spreadsheet or form.
- Basic fields like name, email, and company are used as the starting point for enrichment.

2. Contact Lookup & Parsing
- The contact's domain or company is extracted and used to perform a lookup via an external data source (see the sketch at the end of this section).
- Data such as company details, job title, or LinkedIn profile is retrieved.
- Results are parsed and cleaned to remove duplicates, missing values, and invalid entries.

3. AI Enrichment
- The enriched contact is passed through an AI model (such as GPT or another NLP service).
- The model analyzes job role, seniority, and inferred interests based on the available data.
- Insights like intent, persona category, or engagement score are generated.

4. Validation & Tagging
- The AI-enriched data is validated to ensure consistency and accuracy.
- Tags and segments (e.g., "Decision Maker", "Technical Buyer") are assigned based on rules or AI inference.
- This enables smart filtering, targeting, and routing later in your CRM or campaigns.

5. Output & Integration
- The final enriched and validated contact is written back to your CRM, sheet, or marketing platform.
- The system also:
  - Sends a Slack/email alert with a summary.
  - Updates the original contact entry with a "Processed" or "Enriched" status.
  - Triggers next steps, such as personalized outreach or nurture sequences.

Benefits

- Enhances contact profiles with AI-generated insights and third-party data.
- Improves segmentation & targeting through smart tags and persona classification.
- Automates manual research, saving time and improving accuracy.
- Easily extendable by adding more AI models, data sources, or CRM integrations.
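To make step 2 concrete, here is a minimal sketch of the lookup-preparation logic as an n8n Code node. The `email` field name and the free-mail list are assumptions, so adapt both to your source data.

```javascript
// Derive a company domain from the contact's email so it can be sent to an
// enrichment API. Free-mail domains are skipped since they identify the
// person, not the company.
const FREE_MAIL = new Set(['gmail.com', 'yahoo.com', 'outlook.com', 'hotmail.com']);

const email = ($json.email ?? '').toLowerCase().trim();
const domain = email.includes('@') ? email.split('@')[1] : null;

return [{
  json: {
    ...$json,
    companyDomain: domain && !FREE_MAIL.has(domain) ? domain : null,
  },
}];
```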
by Matt Chong
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Who is this for?

This workflow is for anyone who receives invoices by email and wants to stay on top of payment deadlines without manual tracking.

What problem is this workflow solving?

Invoices often get buried in your inbox. This workflow uses AI to find them, extracts key details, and adds a task to remind you to pay before it's overdue. No more missed payments. No more manual tracking.

How it works

- The workflow is triggered on a schedule (by default, every hour).
- It checks your Gmail inbox for unread messages.
- Each email is passed to an AI agent (using OpenAI), which decides whether it's an invoice.
- If an invoice is found:
  - A task is created in your Google Tasks with the payment reminder and due date.
  - The email is labeled (for tracking) and marked as read.
- If not an invoice: the email is skipped (no action taken).

How to set up

1. Connect these services in your n8n credentials: Gmail (OAuth2), OpenAI, Google Tasks.
2. Create a Gmail label: go to Gmail and create a label named Invoice. This label will be applied to processed invoice emails.
3. Choose your Google Tasks list: in the task creation node, select the correct task list for your reminders.
4. Set the schedule: in the Schedule Trigger node, choose how often it should check your inbox.

How to customize this workflow to your needs

- **Change the Gmail label**: update the label applied to emails after they are processed.
- **Edit the AI prompt**: adjust the system prompt in the OpenAI node if your invoices follow a unique format.
- **Update the task format**: modify the task title and notes to suit how you like your reminders to look (see the sketch below).
- **Adjust the schedule**: run it more or less frequently based on how many invoices you receive.
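As an illustration of the branch logic, here is a hedged sketch of how the AI agent's verdict might be turned into a Google Tasks payload in a Code node. The verdict shape (`isInvoice`, `vendor`, `amount`, `dueDate`) is an assumed output-parser schema, not the template's exact one.

```javascript
// Check the AI agent's verdict before creating a Google Tasks entry.
// The field names below are assumptions; match them to your parser schema.
const verdict = $json.output ?? {};

if (!verdict.isInvoice) {
  // Not an invoice: emit nothing so downstream nodes are skipped.
  return [];
}

return [{
  json: {
    title: `Pay ${verdict.vendor ?? 'invoice'} ${verdict.amount ?? ''}`.trim(),
    notes: `Detected from email: ${$json.subject ?? ''}`,
    dueDate: verdict.dueDate ?? null, // ISO date string for the task's due date
  },
}];
```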
by Yaron Been
Automated monitoring system that tracks startup activities, funding events, and company updates in real time, providing valuable market intelligence.

🚀 What It Does

- Real-time monitoring of startup activities
- Funding alerts and updates
- Competitor tracking
- Industry trend analysis
- Customizable watchlists

🎯 Perfect For

- Venture capitalists
- Startup founders
- Business development teams
- Market researchers
- Investment analysts

⚙️ Key Benefits

- ✅ Stay ahead of market movements
- ✅ Never miss important funding rounds
- ✅ Track competitor activities
- ✅ Identify emerging trends
- ✅ Save hours of manual research

🔧 What You Need

- Crunchbase API access
- n8n instance
- Notification preferences (email/Slack/Teams)

📊 Data Points Tracked

- New funding rounds
- Company updates
- Leadership changes
- Product launches
- Market expansions

🛠️ Setup & Support

Quick setup: deploy in 20 minutes with our step-by-step configuration guide.

📺 Watch Tutorial
💼 Get Expert Support
📧 Direct Help

Stay informed about the startup ecosystem with automated monitoring and alerts. Make data-driven decisions with timely, relevant information.
by Robert Breen
This no-code n8n workflow finds recent Instagram posts by hashtag, scrapes profile data, and uses an AI agent to evaluate whether each account is a good collaboration lead. The workflow filters based on the number of followers and the content of their bio, and outputs structured reasoning for outreach decisions.

Perfect for creators, marketers, or business developers looking to automate influencer or community partnership prospecting—especially in niche ecosystems like n8n.

✅ Key Features

- 🔍 **Hashtag Discovery**: finds recent Instagram posts from a specified hashtag (e.g., #n8n)
- 👤 **Account Scraping**: retrieves profile details such as follower count and biography
- 🧠 **AI Evaluation**: uses OpenAI and LangChain to determine if the profile is a good fit for outreach
- 📦 **Structured Output**: returns a JSON object with "Yes/No" lead status and reasoning
- 🛠️ **Manual Execution**: run on demand using the manual trigger

🧰 What You'll Need

| Tool / API | Purpose | Setup Steps |
|-------------------------|------------------------------------------|-------------|
| Apify Account | To access Instagram scraping actors | Create account → Generate API Token → Use in httpQueryAuth credential in n8n |
| OpenAI API Key | To power the AI decision-making agent | Sign up at OpenAI → Create API key → Paste into OpenAI credential in n8n |
| LangChain Plugin for n8n | AI orchestration with a system message | Install LangChain nodes from Community Nodes (already installed in this workflow) |

🔧 Step-by-Step Setup

1️⃣ Manual Trigger
- **Node**: When clicking 'Execute workflow'
- **Use**: allows you to run the workflow manually while testing.

2️⃣ Define Hashtag
- **Node**: Create Search Term
- **Value**: sets "n8n" as the default Instagram hashtag to scan. You can edit this to any other hashtag you'd like.

3️⃣ Find Recent Posts
- **Node**: Find Recent Posts
- **API**: Apify Instagram Hashtag Scraper
- **Auth Setup**: go to your Apify Console, click "Create new token", then in n8n create a new HTTP Query Auth credential, set the token in the token query param (e.g., ?token=yourTokenHere), and choose the credential in this node.

4️⃣ Scrape Each Profile
- **Node**: Scrape Accounts
- **API**: Apify Instagram Profile Scraper
- **Body**: JSON with usernames from the hashtag search (see the sketch at the end of this section)
- **Note**: uses the same httpQueryAuth credential as the previous node.

5️⃣ Extract Fields
- **Node**: Set bio and follower count
- **What it does**: extracts biography and followersCount from the profile JSON and stores them in clean variables for AI input.

6️⃣ AI Lead Scoring
- **Node**: AI Agent
- **Purpose**: uses GPT-4o-mini to analyze the bio and follower count
- **Prompt Details**:

7️⃣ AI Model
- **Node**: OpenAI Chat Model
- **Model**: gpt-4o-mini
- **Credential**: connect your OpenAI account via API key. Go to OpenAI API Keys, copy your key, and create a new OpenAI API credential in n8n.

8️⃣ Output Parser
- **Node**: Structured Output Parser
- **What it does**: parses the response from the AI into structured JSON for further use (e.g., storing leads, sending to Airtable, etc.)

🧪 Sample Output

{ "lead status": "Yes", "Reasoning": "The user has 3.5k followers and their bio shows they build automations with n8n." }

📬 Need More Help?

If you'd like assistance setting this up, customizing it to your niche, or expanding it to score and store leads automatically — I can help!

👤 Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn
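For step 4, here is a hedged sketch of how the request body for the profile scraper could be assembled from the hashtag results in a Code node. The `ownerUsername` field name is an assumption about the hashtag-scraper output, so verify it against a live run.

```javascript
// Collect unique usernames from the hashtag-scraper results so they can be
// posted to the Apify Instagram Profile Scraper as one request body.
const usernames = [...new Set(
  $input.all()
    .map((item) => item.json.ownerUsername)
    .filter(Boolean)
)];

return [{ json: { usernames } }];
```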
by explorium
Explorium Prospects Search Chatbot Template

Download the following JSON file and import it to a new n8n workflow: mcp_to_prospects_to_csv.json

Overview

This n8n workflow creates a chatbot that understands natural language requests for finding business prospects and automatically:

1. Interprets your query using AI (Claude Sonnet 3.7)
2. Converts it to proper Explorium API filters
3. Validates the API request structure
4. Fetches prospect data from Explorium
5. Exports results as a downloadable CSV file

Perfect for sales teams, recruiters, and business development professionals who need to quickly find and export targeted prospect lists without learning complex API syntax.

Key Features

- **Natural Language Interface**: simply describe who you're looking for in plain English
- **Smart Query Translation**: AI converts your request to valid API parameters
- **Built-in Validation**: ensures API calls meet Explorium's requirements
- **Error Recovery**: automatically retries with corrections if validation fails
- **Pagination Support**: handles large result sets automatically
- **CSV Export**: clean, formatted output ready for CRM import
- **Conversation Memory**: maintains context for follow-up queries

Example Queries

The chatbot understands queries like these (the first is mapped to a filter payload in the sketch after the filter list below):

- "Find marketing directors at SaaS companies in New York with 50-200 employees"
- "Get me CTOs from fintech startups in California"
- "Show me sales managers at healthcare companies with revenue over $10M"
- "Find engineers at Microsoft with 3-5 years experience"
- "Get customer service leads from e-commerce companies in Europe"

Prerequisites

Before setting up this workflow, ensure you have:

- An n8n instance with the chat interface enabled
- An Anthropic API key for Claude
- Explorium API credentials (Bearer token) - get an Explorium API key
- A basic understanding of n8n chat workflows

Supported Filters

The chatbot can search using these criteria:

Company Filters
- **Size**: 1-10, 11-50, 51-200, 201-500, 501-1000, 1001-5000, 5001-10000, 10001+ employees
- **Revenue**: ranges from $0-500K up to $10T+
- **Age**: 0-3, 3-6, 6-10, 10-20, 20+ years
- **Location**: countries, regions, cities
- **Industry**: Google categories, NAICS codes, LinkedIn categories
- **Name**: specific company names

Prospect Filters
- **Job Level**: CXO, VP, Director, Manager, Senior, Entry, etc.
- **Department**: Sales, Marketing, Engineering, Finance, HR, etc.
- **Experience**: total months and current role duration
- **Location**: country and region codes
- **Contact Info**: filter by email/phone availability
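To make the query-to-filter mapping concrete, here is a hedged sketch of the kind of payload the agent might emit for the first example query above. The key names and enum values are illustrative assumptions, not Explorium's exact schema; check the API docs before relying on them.

```javascript
// Illustrative filter payload for:
// "Find marketing directors at SaaS companies in New York with 50-200 employees".
const filters = {
  company_size: { values: ['51-200'] },
  company_location: { values: ['New York'] },
  company_industry: { values: ['SaaS'] },
  job_level: { values: ['director'] },
  job_department: { values: ['marketing'] },
};

return [{ json: { filters, size: 1000 } }];
```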
Installation & Setup

Step 1: Import the Workflow
1. Copy the workflow JSON from the template
2. In n8n: Workflows → Add Workflow → Import from File
3. Paste the JSON and click Import

Step 2: Configure Anthropic Credentials
1. Click on the Anthropic Chat Model1 node
2. Under Credentials, click Create New
3. Add your Anthropic API key
4. Name it "Anthropic API"
5. Save the credentials

Step 3: Configure Explorium Credentials

You'll need to set up Explorium credentials in two places.

For the MCP Client:
1. Click on the MCP Client node
2. Under Credentials, create a new Header Auth
3. Add your authentication header (usually Authorization: Bearer YOUR_TOKEN)
4. Save the credentials

For API calls:
1. Click on the Prospects API Call node
2. Use the same Header Auth credentials created above
3. Verify the API endpoint is correct

Step 4: Activate the Workflow
1. Save the workflow
2. Click the Active toggle to enable it
3. The chat interface will now be available

Step 5: Access the Chat Interface
1. Click on the When chat message received node
2. Copy the webhook URL
3. Access this URL in your browser to start chatting

How It Works

Workflow Architecture
1. Chat Trigger: receives natural language queries from users
2. Memory Buffer: maintains conversation context
3. AI Agent: interprets queries and generates API parameters
4. Validation: checks the API structure against Explorium requirements
5. API Call: fetches prospect data with pagination
6. Data Processing: formats results for CSV export
7. File Conversion: creates the downloadable CSV file

Processing Flow

User Query → AI Interpretation → Validation → API Call → CSV Export

Validation failures feed back into an error-correction loop, where the AI retries with corrected parameters before the API call is made.

Validation Rules

The workflow validates that:
- Filter keys are allowed by the Explorium API
- Values match expected formats (e.g., valid country codes)
- Range filters have proper gte/lte values
- Arrays contain no duplicate values
- The required structure is maintained
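A minimal sketch of such a validation step, assuming a simple allow-list and the rules above; `ALLOWED_KEYS` is illustrative rather than Explorium's real key list.

```javascript
// Sketch of the validation node: reject unknown filter keys, malformed
// ranges, and duplicate array values before the API call is made.
const ALLOWED_KEYS = new Set([
  'company_size', 'company_revenue', 'company_age', 'company_location',
  'company_name', 'job_level', 'job_department', 'experience_months',
]);

const errors = [];
const filters = $json.filters ?? {};

for (const [key, value] of Object.entries(filters)) {
  if (!ALLOWED_KEYS.has(key)) errors.push(`Unsupported filter: ${key}`);

  // Array-style filters must not contain duplicates.
  if (Array.isArray(value?.values) && new Set(value.values).size !== value.values.length) {
    errors.push(`Duplicate values in ${key}`);
  }

  // Range-style filters need bounds in the right order.
  if (value?.gte !== undefined && value?.lte !== undefined && value.gte > value.lte) {
    errors.push(`Range filter ${key} has gte greater than lte`);
  }
}

return [{ json: { valid: errors.length === 0, errors } }];
```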
Usage Guide

Basic Conversation Flow

1. Start with your query: "Find me VPs of Sales at software companies in the US"
2. The bot processes and responds: it generates API filters, validates the structure, fetches the data, and returns a CSV download link.
3. Refine if needed: "Can you also include directors and filter for companies with 100+ employees?"

Query Tips

- **Be specific**: include job titles, departments, and company details
- **Use standard terms**: "CTO" instead of "Chief Technology Officer"
- **Specify locations**: use country names or standard codes
- **Include size/revenue**: helps narrow results effectively

Advanced Queries

Combine multiple criteria: "Find engineering managers and senior engineers at B2B SaaS companies in New York and California with 50-500 employees and revenue over $5M who have been in their role for at least 1 year"

Output Format

The CSV file includes:

- Prospect ID
- Name (first, last, full)
- Location (country, region, city)
- LinkedIn profile
- Experience summary
- Skills and interests
- Company details
- Job information
- Business ID

Troubleshooting

Common Issues

"Validation failed" errors
- Check that your query uses supported filter values
- Ensure location names are spelled correctly
- Verify company sizes/revenues match the allowed ranges

No results returned
- Broaden your search criteria
- Check if the company exists in Explorium's database
- Verify filter combinations aren't too restrictive

Chat not responding
- Ensure the workflow is activated
- Check all credentials are properly configured
- Verify the webhook URL is accessible

Large result sets timing out
- Try adding more specific filters
- Limit results by location or company size
- Use the size parameter (max 10,000)

Error Messages

The bot provides clear feedback:
- **Invalid filters**: shows which filters aren't supported
- **Value errors**: lists the correct options for each field
- **API failures**: explains connection or authentication issues

Performance Optimization

Best Practices
1. Start broad, then narrow: begin with basic criteria and add filters
2. Use business IDs when targeting specific companies
3. Limit by contact info: add has_email: true for actionable leads
4. Batch by location: process regions separately for large searches

API Limits
- Maximum 10,000 results per search
- Pagination handles up to 100 records per page
- Rate limits apply based on your Explorium subscription

Customization Options

Modify AI Behavior

Edit the AI Agent system message to:
- Change the response format
- Add custom filters
- Adjust interpretation logic
- Include additional instructions

Extend Functionality

Add nodes to:
- Send results via email
- Import directly to a CRM
- Schedule recurring searches
- Create custom reports

Integration Ideas
- Connect to Slack for team queries
- Add to CRM workflows
- Create lead scoring systems
- Build automated outreach campaigns

Security Considerations

- API credentials are stored securely in n8n
- Chat sessions are isolated
- No prospect data is stored permanently
- CSV files are generated on demand

Support Resources

For issues with:
- **n8n platform**: check the n8n documentation
- **Explorium API**: contact Explorium support
- **Anthropic/Claude**: refer to the Anthropic docs
- **Workflow logic**: review the node configurations
by Yang
🧾 What this workflow does

This workflow takes a reference ad image and brand website, then uses GPT-4, LangChain, and Dumpling AI to generate 10 high-quality image variations for ad testing. These image variations are visually consistent but subtly different in background, mood, lighting, and tone — perfect for performance testing on platforms like Meta Ads or TikTok.

👤 Who is this for

- DTC marketers and brand designers testing ad creatives
- Creative teams automating visual experimentation
- Content agencies using AI for fast ad mockups
- Performance marketers running multivariate testing

⚙️ How to set up

✅ Requirements

You'll need the following tools set up in n8n:

- Google Drive (OAuth2 credential)
- Google Sheets (OAuth2 credential)
- OpenAI API (for GPT-4 or GPT-4o)
- Dumpling AI API (via HTTP header authentication)

🛠️ Steps to configure

1. Google Sheet setup: create a sheet with one column, Image URL, and update the Sheet ID and tab name in the final Google Sheets node.
2. Drive setup: create a folder in Google Drive for storing the reference image, and replace the folderId in the "Upload Ad Image to Google Drive" node.
3. Dumpling AI API key: use n8n's credential manager (HTTP Header Auth) — do not hardcode the key.
4. OpenAI API key: required for both image description and LangChain agent prompt generation.
5. Form inputs required: Brand Name, Brand Website, Ad Image (upload field).

🧠 How it works

1. A user submits the brand name, website, and a reference ad image through a form.
2. The image is uploaded to Google Drive.
3. GPT-4o describes the image's visual style (e.g., mood, lighting, composition).
4. GPT-4 analyzes the brand's website to define its visual aesthetic.
5. A LangChain agent uses both analyses to create 10 tightly scoped variation prompts (see the sketch below).
6. Dumpling AI generates a new image for each prompt using its "FLUX.1-pro" model.
7. Each new image's link is logged into Google Sheets.

🛠️ How to customize

- 🧪 Change the prompt logic to experiment with different variations (e.g., theme, season).
- 🎨 Switch the image model in Dumpling AI to one that supports your desired style.
- 🔗 Log additional metadata (prompt, timestamp) to Google Sheets.
- 📤 Connect output images to Airtable, Notion, or a review tool like Figma.
- 🎯 Modify the GPT system message to reflect a different tone or brand strategy.

This workflow gives creative teams and marketers an instant, AI-powered ad image testing system — built on real brand visuals, not generic stock content.
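As a sketch of step 5's hand-off, here is how the agent's ten prompts might be fanned out into one item each for the image-generation request. The `prompts` field name is an assumption about the agent's structured output.

```javascript
// Fan out the LangChain agent's variation prompts: one n8n item per prompt,
// each carrying the model name used by the Dumpling AI request downstream.
const prompts = $json.prompts ?? [];

return prompts.map((prompt, i) => ({
  json: {
    variation: i + 1,
    prompt,
    model: 'FLUX.1-pro', // the model named in this template
  },
}));
```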
by Humble Turtle
Architecture Agent

Overview

The Architect Agent listens to Slack messages and generates full data architecture blueprints in response. Powered by Claude 3.5 (Anthropic) for reasoning and design, and Tavily for real-time web search, this agent creates production-ready data pipeline scaffolds on-demand — transforming natural language prompts into structured data engineering solutions.

Capabilities

- Understands and interprets user requests from Slack
- Designs end-to-end data pipeline architectures using industry best practices
- Outputs include high-level architecture diagrams

Required Connections

To operate correctly, the following integrations must be in place:

- Slack API token with permission to read messages and post responses
- Tavily API key for external search functionality
- Claude 3.5 API access via Anthropic

Detailed configuration instructions are provided in the workflow.

Setup time: <15 minutes

Example input:

"Create a data pipeline orchestrated by Airflow, running on a Docker image. It should connect to a MySQL database, load the data into a PostgreSQL DB (incremental load), and then transform the data into business-oriented tables, also in the PostgreSQL database. Create an example setup with raw sales data."

Customising this workflow

Try saving outputs to Google Drive to store all your architecture blueprints.
by Onur
Effortless Task Management: Create Todoist Tasks Directly from Telegram with AI

This n8n workflow empowers you to seamlessly manage your tasks by creating Todoist entries directly from Telegram, using the power of AI. Simply send a voice or text message to your Telegram bot, and this workflow will transform it into actionable tasks in your Todoist account.

Who is this for?

- **Busy professionals** who need a quick and easy way to capture tasks on the go.
- **Students** looking to streamline their assignments and project management.
- **Anyone** who wants to leverage AI for effortless task management.

What Problem Does it Solve?

This workflow eliminates the need to manually enter tasks into Todoist. It automates the process of capturing, organizing, and prioritizing tasks, saving you time and effort.

What are the Benefits?

- **Seamless Integration:** connect your Telegram and Todoist accounts for a frictionless workflow.
- **AI-Powered Task Breakdown:** the LLM intelligently analyzes your messages and breaks them down into manageable sub-tasks.
- **Voice-to-Task:** create tasks with voice messages for hands-free convenience.
- **Increased Productivity:** capture and organize tasks quickly, keeping you focused and productive.
- **Accessibility:** access your tasks from anywhere with Todoist's mobile app and Google extension.

How it Works

1. Send a message: send a voice or text message describing your task to your Telegram bot.
2. AI analysis: the workflow uses an LLM (OpenAI Chat Model) to analyze your message and break it down into sub-tasks.
3. Task creation: the workflow creates tasks in your Todoist account based on the AI's analysis (see the sketch below).
4. Notification: you receive a Telegram notification with a link to your newly created tasks in Todoist.

Nodes in the Workflow

- **Telegram Trigger:** listens for incoming messages on Telegram.
- **Switch:** routes messages based on their type (voice or text).
- **Telegram:** fetches voice messages from Telegram.
- **OpenAI:** transcribes voice messages to text using OpenAI's Whisper API.
- **Edit Fields:** prepares the text for the LLM.
- **Basic LLM Chain:** analyzes messages and generates sub-tasks using OpenAI's GPT model.
- **Structured Output Parser:** extracts sub-tasks from the LLM's response.
- **Todoist:** creates tasks in your Todoist account.
- **Telegram:** sends a notification with a link to your Todoist tasks.

Requirements

- Active n8n instance
- Telegram account with a bot
- Todoist account
- OpenAI API key

Setup Information

1. Import the workflow JSON into your n8n instance.
2. Configure the Telegram Trigger node with your bot token.
3. Set up the OpenAI credentials with your API key.
4. Connect your Todoist account in the Todoist node.
5. Customize the LLM prompt (optional) to fine-tune task creation.

Additional Tips

- Explore Todoist's features to further organize and manage your tasks.
- Experiment with different LLM prompts to optimize task breakdown.
- Use n8n's features to automate other aspects of your workflow.

This workflow combines the convenience of Telegram with the power of AI and Todoist to provide a seamless task management experience. Start managing your tasks effortlessly today!
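To illustrate the hand-off between the Structured Output Parser and the Todoist node, here is a hedged sketch as an n8n Code node; the `subTasks` shape is an assumed parser schema, so align it with your own prompt.

```javascript
// Map the LLM's structured output to one n8n item per Todoist task.
const { mainTask, subTasks = [] } = $json.output ?? {};

return subTasks.map((t) => ({
  json: {
    content: t.title,             // becomes the Todoist task name
    description: t.notes ?? '',   // optional detail for the task
    section: mainTask ?? 'Inbox', // group sub-tasks under the original request
  },
}));
```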
by Joseph
This n8n workflow automates SEO keyword research by querying the Ahrefs API for keyword data and related keyword insights. The enriched data is then processed by an AI agent to format a response and provide valuable SEO recommendations.

Perfect for SEO specialists, content marketers, digital agencies, and anyone looking to gain valuable insights into keyword opportunities to boost their rankings.

⚙️ How This Workflow Works

This workflow guides you through the entire SEO keyword research process, from entering the initial keyword to receiving detailed insights and related keyword suggestions.

1. 🗣️ User Input (Keyword Query)
The user enters a keyword they want to research. This input is captured by the Chat Input node, ready for analysis.

2. 🤖 AI Agent (Input Verification)
The AI agent reviews the keyword input for any grammatical errors or extra commentary. If necessary, it cleans the input to ensure a seamless query to the API.

3. 🔑 Ahrefs API (Keyword Data Retrieval)
The cleaned keyword is sent to the Ahrefs Keyword Tool API. This retrieves a detailed report including metrics like search volume, keyword difficulty, and CPC.

4. 💡 Related Keywords Extraction (Using a JavaScript Function)
The workflow uses a JavaScript function to extract the main keyword's data and the data for 10 related keywords from the Ahrefs response. You can tweak the script to adjust the number of related keywords or the level of detail you want (a sketch follows at the end of this section).

5. 🧠 AI Agent (Text Formatting)
The aggregated data, including both the main keyword and related keywords, is sent to an AI agent. The AI agent formats the data into a concise, readable format that can be shared with the user.

6. 📨 Final Response
The formatted text is delivered to the user with keyword insights, recommendations, and related keyword suggestions.

✅ Smart Retry & Error Handling

Each subworkflow includes a fail-safe mechanism to ensure:

- ✅ Proper error handling for any issues with the API request.
- 🕒 Failed API requests are retried after a customizable period (e.g., 2 hours or 1 day).
- 💬 User input validation prevents incorrect or malformed queries from being processed.

📋 Ahrefs API Setup

To use this workflow, you'll need to set up your Ahrefs API credentials:

- 🔑 Sign up for an account and get your key here: Ahrefs Keyword Tool API.
- Once signed up, you'll receive an API key, which you'll use in the x-rapidapi-key header in n8n.
- Check the Ahrefs Keyword Tool API documentation for more details on available parameters.

📥 How to Import This Workflow

1. Copy the JSON code.
2. Open your n8n instance.
3. Open a new workflow.
4. Paste anywhere inside the workflow.
5. Voilà.

🛠️ Customization Options

- Adjust the number of related keywords extracted (default is 10).
- Customize the AI agent's response formatting or add specific recommendations for users.
- Modify the JavaScript function to extract different metrics from the Ahrefs API.

🧪 Use Case Example

Trying to optimize your blog post around a specific keyword?

1. Query a broad keyword, like "SEO tips".
2. Get related keyword data and search volume insights.
3. Use the AI agent to provide keyword recommendations and additional topics to target.

💥 Boost your content strategy with fresh keywords and relevant search data!
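A hedged sketch of the extraction function described in step 4; the field names (`keyword`, `volume`, `difficulty`, `cpc`, `related`) are assumptions about the RapidAPI response, so inspect a live response and adjust.

```javascript
// Extract the main keyword's metrics plus the first 10 related keywords
// from the Ahrefs response. Change the slice to return more or fewer.
const data = $json.body ?? $json;

const main = {
  keyword: data.keyword,
  volume: data.volume,
  difficulty: data.difficulty,
  cpc: data.cpc,
};

const related = (data.related ?? []).slice(0, 10).map((r) => ({
  keyword: r.keyword,
  volume: r.volume,
}));

return [{ json: { main, related } }];
```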
by Jimleuk
This n8n template imports an XLSX containing term dates for a university, extracts the relevant events using AI, and converts the events to an ICS file which can be imported into iCal, Google Calendar, or Outlook.

Manually adding important term dates to your calendar by hand? Stop! Automate it with this simple AI/LLM-powered document understanding and extraction template. This cool use case can be applied to many scenarios where Excel files are predominantly used.

How it works

- The term dates Excel file (XLSX) is imported into the workflow from the university's website using the HTTP Request node.
- To parse the Excel file, we use an external service, Cloudflare's Markdown Conversion Service. This converts the Excel's sheets into markdown tables which our LLM can read.
- To extract the events and their dates from the markdown, we use the Information Extractor node for structured output. LLMs are great for this use case because they can understand the layout; one row may hold many data points.
- With our data, there are endless possibilities! For this demonstration, we generate an ICS file so the extracted events can be imported into a calendar. We use a Python Code node to combine the events into the ICS spec (see the sketch below) and the Convert to File node to create the ICS binary.
- Finally, we distribute the ICS file by email to other students or instructors who may also find this incredibly helpful for the upcoming semester!

How to use

- Ensure you're downloading the correct Excel file and amend the URL parameter of the "Get Term Dates Excel" node as necessary.
- Update the Gmail node with your email, or other emails as required. Alternatively, send the ICS file to Google Drive or a student portal.

Requirements

- A Cloudflare account, required to use the Markdown Conversion Service.
- Gemini for LLM document understanding and extraction.
- Gmail for email sending.

Customising the workflow

This template should work for other Excel files, of which - for a university - there are many. Some will be more complicated than others, so experiment with different parsers and extraction tools and strategies.
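The template builds the ICS in a Python Code node; purely as an illustration of the same idea, here is an equivalent JavaScript sketch, assuming each extracted event carries a `summary` and an all-day ISO `date` (YYYY-MM-DD).

```javascript
// Assemble a minimal ICS calendar from the extracted events, one VEVENT
// per event, using all-day DATE values per the iCalendar spec.
const fmt = (iso) => iso.replaceAll('-', ''); // 2025-09-01 -> 20250901

const events = $input.all().map(({ json: e }) => [
  'BEGIN:VEVENT',
  `UID:${e.summary.replace(/\s+/g, '-')}-${fmt(e.date)}@term-dates`,
  `DTSTART;VALUE=DATE:${fmt(e.date)}`,
  `SUMMARY:${e.summary}`,
  'END:VEVENT',
].join('\r\n'));

const ics = [
  'BEGIN:VCALENDAR',
  'VERSION:2.0',
  'PRODID:-//n8n//term-dates//EN',
  ...events,
  'END:VCALENDAR',
].join('\r\n');

return [{ json: { ics } }];
```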