by Lucas Peyrin
How it works

This template is an interactive, step-by-step tutorial designed to teach you the most important skill in n8n: using expressions to access and manipulate data. If you know what JSON is but aren't sure how to pull a specific piece of information from one node and use it in another, this workflow is for you. It starts with a single "Source Data" node that acts as our filing cabinet, then walks you through a series of lessons, each demonstrating a new technique for retrieving and transforming that data.

You will learn how to:
- Access a simple value from a previous node.
- Use n8n's built-in selectors like .last() and .first().
- Get a specific item from a list (array).
- Drill down into nested data (objects).
- Combine these techniques to access data in an array of objects.
- Go beyond simple retrieval by using JavaScript functions to do math or change text.
- Inspect data with utility functions like Object.keys() and JSON.stringify().
- Summarize data from multiple items using .all() and arrow functions.

Set up steps

Setup time: 0 minutes! This workflow is a self-contained tutorial and requires no setup or external credentials.
1. Click "Execute Workflow" to run the entire tutorial.
2. Follow the flow from the "Source Data" node to the "Final Exam" node.
3. For each lesson, click on the node to see how its expressions are configured in the parameters panel.
4. Read the detailed sticky note next to each lesson—it breaks down exactly how the expression works and why.

By the end, you'll have the foundational knowledge to connect data and build powerful, dynamic workflows in n8n.
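The techniques above map onto plain JavaScript patterns. Inside n8n you would write them as expressions such as `{{ $('Source Data').first().json.user.name }}`; this sketch imitates the same access patterns on sample data shaped like a node's output (the field names here are illustrative, not the template's actual source data):

```javascript
// Sample data shaped like the output items of a hypothetical "Source Data" node.
const items = [
  { json: { user: { name: "Ada" }, scores: [10, 20, 30] } },
  { json: { user: { name: "Grace" }, scores: [5, 15, 25] } },
];

// .first() / .last() style access
const firstName = items[0].json.user.name;                 // "Ada"
const lastName  = items[items.length - 1].json.user.name;  // "Grace"

// Get a specific item from an array, drilling into nested objects
const secondScore = items[0].json.scores[1];               // 20

// Inspect structure with utility functions
const keys = Object.keys(items[0].json);                   // ["user", "scores"]

// Summarize across all items with arrow functions (the .all() pattern)
const total = items
  .map((item) => item.json.scores.reduce((a, b) => a + b, 0))
  .reduce((a, b) => a + b, 0);                             // 60 + 45 = 105
```

In the workflow itself, each of these patterns appears inside an expression's `{{ … }}` delimiters rather than as standalone code.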
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

This advanced automation builds a fully autonomous SEO blog writer using n8n, Scrapeless, LLMs, and the Pinecone vector database. It's powered by a Retrieval-Augmented Generation (RAG) system that collects high-performing blog content, stores it in a vector store, and then generates new blog posts based on that knowledge—endlessly.

Part 1: Build a Knowledge Base from Popular Blogs
- **Scrape existing articles** from a well-established writer (in this case, Mark Manson) using the Scrapeless node.
- **Extract content from blog pages** and store it in **Pinecone**, a powerful vector database that supports similarity search.
- Use **Gemini Embedding 001** or any other supported embedding model to encode blog content into vectors.
- **Result**: You'll have a searchable vector store of expert-level content, ready to be used for content generation and intelligent search.

Part 2: SERP Analysis & AI Blog Generation
- Use Scrapeless' SERP node to fetch search results based on your keyword and search intent.
- Send the results to an LLM (like Gemini, OpenRouter, or OpenAI) to generate a keyword analysis report in Markdown, then convert it to HTML.
- Extract long-tail keywords, search intent insights, and content angles from this report.
- Feed everything into another LLM with access to your Pinecone-stored knowledge base, and generate a fully SEO-optimized blog post.

Set up steps

Prerequisites:
- Scrapeless API key
- Pinecone account and index setup
- An embedding model (Gemini, OpenAI, etc.)
- n8n instance with the community node n8n-nodes-scrapeless installed

Credential Configuration:
- Add your Scrapeless and Pinecone credentials in n8n under the "Credentials" tab
- Choose embedding dimensions according to the model you use (e.g., 768 for Gemini Embedding 001)

Key Highlights
- **Clones a real content creator**: Replicates knowledge and writing style from top-performing blog authors.
- **Auto-scrapes hundreds of blog posts** without being blocked.
- **Stores expert content** in a vector DB to build a reusable knowledge base.
- **Performs real-time SERP analysis** using Scrapeless to fetch and analyze search data.
- **Generates SEO blog drafts** using RAG with detailed keyword intelligence.
- **Output includes**: blog title, HTML summary report, long-tail keywords, and AI-written article body.

RAG + SEO: The Future of Content Creation

This template combines:
- **AI reasoning** from large language models
- **Reliable data scraping** from Scrapeless
- **Scalable storage** via the Pinecone vector DB
- **Flexible orchestration** using n8n nodes

This is not just an automation—it's a full-stack SEO content machine that enables you to:
- Build a domain-specific knowledge base
- Run intelligent keyword research
- Generate traffic-ready content on autopilot

💡 Use Cases
- SaaS content teams cloning competitor success
- Affiliate marketers scaling high-traffic blog production
- Agencies offering automated SEO content services
- AI researchers building personal knowledge bots
- Writers automating first-draft generation with real-world tone
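The RAG loop described above boils down to: embed the query, find the most similar stored chunks, and inject them into the LLM prompt as context. A minimal plain-JavaScript sketch, using tiny hand-made vectors in place of real Gemini embeddings (a production index would hold 768-dimensional vectors in Pinecone):

```javascript
// Toy knowledge base with pre-computed embedding vectors. Real vectors come
// from an embedding model such as Gemini Embedding 001 (768 dimensions).
const knowledgeBase = [
  { text: "How to build self-discipline", vector: [0.9, 0.1, 0.0] },
  { text: "Why goals are overrated",      vector: [0.2, 0.8, 0.1] },
  { text: "The subtle art of focus",      vector: [0.7, 0.3, 0.2] },
];

function cosineSimilarity(a, b) {
  const dot  = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Retrieve the top-k chunks most similar to the query vector; these are
// what gets injected into the blog-generation prompt as context.
function retrieve(queryVector, k = 2) {
  return knowledgeBase
    .map((doc) => ({ ...doc, score: cosineSimilarity(queryVector, doc.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((doc) => doc.text);
}

const context = retrieve([0.85, 0.15, 0.05]); // query vector close to "self-discipline"
```

In the workflow, Pinecone performs this similarity search server-side; the sketch only illustrates the ranking that happens under the hood.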
by Sheryl
Description

This workflow provides a powerful AI assistant for content creators, book editors, and marketers. It automates the collection and analysis of trending discussions from Reddit, YouTube, and X (Twitter), generating insightful topic reports. This frees you from hours of tedious data compilation, allowing you to make faster, more accurate topic decisions based on deep AI analysis.

How it works

This workflow simulates the complete research process of a strategic editor:
1. Initiate & Collect: A user submits a keyword via a public Form Trigger. The workflow then automatically fetches relevant, trending content in parallel from the official APIs of Reddit, YouTube, and X (Twitter).
2. Multi-stage AI Processing & Analysis: The workflow uses a layered AI pipeline to process the data. First, a lightweight Gemini model in the AI Pre-filter Content node rapidly screens the vast amount of content to filter out noise. Next, a more powerful Gemini Pro model in the AI Deep Analysis node performs a detailed, structured analysis on each high-value item, extracting summaries, sentiment, and key arguments. Finally, a "strategist" AI model in the AI Synthesize Final Report node aggregates all analyses to generate the comprehensive final topic report in HTML.
3. Multi-Channel Report Distribution: The workflow distributes the final report to multiple channels based on pre-defined templates. The Send Gmail Report node sends the complete HTML report, the Send Feishu Notification node sends a concise summary card to a group chat, and the Archive to Google Sheets node archives the key data.

Setup Steps

This workflow takes approximately 20-30 minutes to set up, with most of the time spent connecting your accounts.

Connect Your API Accounts: In the n8n Credentials section, prepare and connect credentials for the following services:
- Google: For the Gemini AI model, Gmail sending, and Google Sheets archiving. This requires a Google Cloud API Key and OAuth2 credentials.
- Reddit: For fetching Reddit posts. This requires a Reddit account with OAuth2 configured in n8n to allow searches.
- YouTube: For collecting YouTube videos. You'll need to enable the YouTube Data API v3 in your Google Cloud Console and get an API Key.
- Twitter: For the official Twitter node, requiring a free developer account and an App with v2 API access.

Configure Output Channels: In the final nodes (Send Gmail Report, Send Feishu Notification, Archive to Google Sheets), update the recipient email address, the Feishu bot's Webhook URL, and the target spreadsheet ID to match your own.

Activate and Share the Trigger: Activate the workflow. The first Form Trigger node will automatically generate a public URL. Share this link with your team members to let them start using the tool.
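The two-stage pipeline above (cheap pre-filter, then expensive deep analysis) can be sketched as plain functions. Here the model calls are stubbed with simple heuristics; in the workflow they are Gemini nodes, and the field names are illustrative:

```javascript
// Stage 1: a cheap pre-filter screens items so the expensive deep-analysis
// stage only runs on high-value content. Stand-in for the lightweight model:
// drop short, low-engagement noise.
function preFilter(items) {
  return items.filter((item) => item.text.length >= 50 && item.engagement >= 10);
}

// Stage 2: stand-in for the Gemini Pro structured analysis.
function deepAnalyze(item) {
  return { summary: item.text.slice(0, 40), sentiment: "neutral", source: item.source };
}

const collected = [
  { source: "reddit",  text: "ok", engagement: 500 },
  { source: "youtube", text: "A long, detailed discussion of the topic with many arguments.", engagement: 120 },
  { source: "x",       text: "A thoughtful thread exploring the keyword from several angles.", engagement: 3 },
];

// Only items that survive the pre-filter reach the expensive stage.
const analyses = preFilter(collected).map(deepAnalyze);
```

The design point is cost control: the lightweight model sees everything, but the powerful (expensive) model only sees what survives the screen.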
by Yang
📝 Description

🤖 What this workflow does
This workflow turns Reddit pain points into emotionally-driven comic-style ads using AI. It takes in a product description, scrapes Reddit for real user pain points, filters relevant posts using AI, generates ad angles, rewrites them into 4-panel comic prompts, and finally uses Dumpling AI to generate comic-style images. All final creatives are uploaded to Google Drive.

🧠 What problem is this solving?
Crafting ad content that truly speaks to customer struggles is time-consuming. This workflow automates that entire process — from pain point discovery to visual creative output — using AI and Reddit as a source of truth for customer language.

👤 Who is this for?
- Copywriters and performance marketers
- Startup founders and indie hackers
- Creatives building empathy-driven ad concepts
- Automation experts looking to generate scroll-stopping content

⚙️ Setup Instructions
Here's how to set everything up, step by step:

🔹 1. Trigger: Form Input
Node: 📝 Form - Submit Product Info
This form asks the user to enter: Brand Name, Website, Product Description.
✅ Make sure this form is active and testable.

🔹 2. Generate Reddit Keyword
Node: 🧠 GPT-4o - Generate Reddit Keyword
Uses the product description to generate a search keyword based on what your audience might be discussing on Reddit.

🔹 3. Search Reddit
Node: 🔍 Reddit - Search Posts
Uses the keyword to search Reddit for relevant threads. Make sure your Reddit integration is properly configured.

🔹 4. Filter Valid Posts
Node: 🔎 IF - Check Upvotes & Text Length
Filters out low-effort or unpopular posts. Only keeps posts with a minimum of 2 upvotes and content at least 100 characters long.
✅ You can adjust these thresholds in the node settings.

🔹 5. Clean Reddit Output
Node: 🧼 Code - Structure Reddit Posts
Formats the list of posts into clean JSON for the AI agents to process.

🔹 6. Check Relevance with AI Agent
Node: 🤔 Langchain Agent - Post Relevance Classifier
Uses a LangChain agent (tool: think2) to determine whether each post is relevant to your product. Only relevant ones are passed forward.

🔹 7. Aggregate Relevant Posts
Node: 📦 Code - Merge Relevant Posts
Collects all relevant posts into a clean format for the next GPT-4o call.

🔹 8. Generate Ad Angles
Node: ✍️ GPT-4o - Generate Emotional Ad Angles
Writes 10 pain-point-based marketing angles using real customer language.

🔹 9. Rank the Best Angles
Node: 📊 GPT-4o - Rank Top 10 Angles
Scores the generated angles and ranks them from most to least powerful. Only the top 3 are passed forward.

🔹 10. Turn Angles into Comic Prompts
Node: 🎭 GPT-4o - Write Comic Scene Prompts
Rewrites each of the top ad angles into a 4-panel comic strip structure (pain → tension → product → resolution).

🔹 11. Generate Comic Images
Node: 🎨 Dumpling AI - Generate Comic Panels
Sends each prompt to Dumpling AI to create visual comic scenes.

🔹 12. Wait for Image Generation
Node: ⏳ Wait - Dumpling AI Response Time
Adds a delay to give Dumpling AI time to finish generating all images.

🔹 13. Get Final Image URLs
Node: 🔗 Code - Extract Image URLs from Dumpling Response
Extracts all image links for preview/download.

🔹 14. Upload to Google Drive
Node: ☁️ Google Drive - Upload Comics
Uploads the comic images to your chosen Google Drive folder.
✅ Update this node with your destination folder ID.

🔹 15. Log Final Output (Optional)
You can extend the flow to log the image links, ad angles, and Reddit sources to Google Sheets, Airtable, or Notion, depending on your use case.

🛠️ How to Customize
- ✏️ Adjust tone: Update GPT-4o system prompts to sound more humorous, emotional, or brand-specific.
- 🧵 Use different styles: Swap Dumpling AI image settings for ink sketch, manga, or cartoon renderings.
- 🔄 Change input source: Replace Reddit with X (Twitter), Quora, or YouTube comments.
- 📦 Store results differently: Swap Google Drive for Notion, Dropbox, or Airtable.

This workflow turns real audience struggles into thumb-stopping comic content — automatically.
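The IF-node check in step 4 amounts to a simple predicate over each post. A sketch of the same filter as it might look in an n8n Code node, using Reddit's API field names (`ups`, `selftext`) as an assumption:

```javascript
// Thresholds from step 4 — adjustable in the node settings.
const MIN_UPVOTES = 2;
const MIN_LENGTH = 100;

// Keep only posts with enough engagement and enough text to mine for pain points.
function keepPost(post) {
  return post.ups >= MIN_UPVOTES && (post.selftext || "").length >= MIN_LENGTH;
}

const posts = [
  { title: "Great thread", ups: 14, selftext: "x".repeat(250) },
  { title: "Low effort",   ups: 0,  selftext: "meh" },
  { title: "No body",      ups: 9,  selftext: "" },
];

const validPosts = posts.filter(keepPost);
```

Raising `MIN_LENGTH` trades recall for quality: longer self-text posts tend to contain the richer customer language the ad-angle prompts rely on.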
by David Olusola
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

MCP Gmail Workflow – AI-Powered Email Management

✨ What It Does
A smart n8n workflow that connects Gmail with an AI agent (via MCP), letting you send, read, and organize emails using natural language.

⚙️ Key Features
- 🧠 AI Commands: "Send email to John about the budget"
- 📥 Inbox Control: Mark read/unread, apply/remove labels
- 🗂 Smart Organization: Auto-label based on content
- 🤖 MCP-Ready: Works with Claude, ChatGPT, etc.

🎯 Use Cases
- 📤 "Send a follow-up to the client about yesterday's meeting"
- 📬 "Mark all newsletters as read and label 'Newsletter'"
- 🧾 "Summarize the latest email from Sarah"
- 🗃 "Label all Project X emails as 'Project-X-2024'"
- ⭐ "Find unread emails from my manager and mark them as important"

🛠 Setup Guide

🔑 Prerequisites
- n8n (self-hosted or cloud)
- Gmail API credentials
- MCP-compatible AI (optional but powerful)

📥 1. Import Workflow
Copy JSON → Open n8n → Import → Paste → Done ✅

🔐 2. Gmail OAuth2 Setup
- Create a Google project → Enable the Gmail API
- Create OAuth2 credentials → Add the n8n redirect URI
- In n8n: Add Gmail OAuth2 → Paste Client ID/Secret → Connect

🧩 3. Update Credential References
- Find your credential ID in n8n
- Update each Gmail node with your ID

🧠 4. MCP Trigger (Optional)
- Use the provided webhook URL in your AI system
- Send test prompts to verify the connection

🧪 5. Test Key Actions
- ✅ "Send a test email"
- ✅ "Read the latest email"
- ✅ "Label the last email as 'Test'"
- ✅ "Mark the latest email as unread"

⚙️ 6. Advanced Tips
- Create custom labels in Gmail
- Use HTTPS + webhook auth
- Add retries and error handling in n8n

🧯 Troubleshooting
- ❗ Gmail auth error? → Re-authenticate and check the redirect URI
- ❗ Webhook not firing? → Check the endpoint and run a manual test
- ❗ Label errors? → Use correct label names or IDs

✅ Required Gmail Scopes: gmail.modify, gmail.send

📈 Best Practices
- 🔁 Test regularly
- 🔒 Use minimal permissions
- 🏷 Consistent label naming
- 🔍 Monitor execution and webhook logs

🎉 You're All Set!
Control Gmail with your voice or text through AI. Make managing emails smarter, faster, and 100% automated 💌
by Luka Zivkovic
Complete Telegram Trivia Bot with AI Question Generation

Build a fully-featured Telegram trivia bot that automatically generates fresh questions daily using OpenAI and tracks user progress with NocoDB. Perfect for communities, education, or entertainment!

✨ Key Features
- 🤖 **AI Question Generation**: Automatically creates 40+ new trivia questions daily across 8 categories
- 📊 **Smart User Management**: Tracks scores, prevents question repeats, maintains leaderboards
- 🎮 **Game Mechanics**: Star-based difficulty scoring, answer history, progress tracking
- 🏆 **Competitive Elements**: Real-time leaderboards with emoji rankings and user positioning
- 🛡️ **Robust Architecture**: Error handling, state management, and data validation

🚀 Perfect For
- **Community Engagement**: Keep Telegram groups active with daily trivia challenges
- **Educational Content**: Create learning experiences with categorized questions
- **Business Applications**: Employee training, customer engagement, lead generation
- **Personal Projects**: Learn n8n automation while building something fun

📱 Supported Commands
- /start - Welcome new users with setup instructions
- /question - Get personalized trivia questions (never repeats correctly answered ones)
- /score - View current points and statistics
- /leaderboard - See the top 10 players with rankings
- /stats - Detailed accuracy and performance metrics
- /help - Complete command reference

🔧 How It Works

User Journey:
1. User sends the /question command to the bot
2. System checks their answer history to avoid repeats
3. Displays a fresh question with multiple-choice options
4. Processes the answer and updates the score based on difficulty stars
5. Saves the complete answer history for future filtering

AI Content Pipeline:
1. A daily scheduler triggers question generation
2. OpenAI creates 5 questions per category (8 categories total)
3. Questions are automatically saved to NocoDB with difficulty ratings
4. Content includes explanations and proper formatting

🛠️ Set Up Steps

Prerequisites:
- n8n instance (cloud or self-hosted)
- NocoDB database (free tier works)
- OpenAI API key (not required if you want to add questions yourself)
- Telegram bot token

Database Setup: Create 3 NocoDB tables with the exact field specifications provided in the sticky notes. The workflow includes complete schema documentation.

Configuration Time: ~15 minutes for database setup + API keys

Detailed Setup Instructions: All setup steps, database schemas, and configuration details are documented in the workflow's sticky notes for easy implementation.

📈 Advanced Features
- **Question History Tracking**: Users never see correctly answered questions again
- **Difficulty-Based Scoring**: 1-5 star rating system with corresponding points
- **Category Management**: 8 different trivia categories for variety
- **State Management**: Proper game flow with idle/waiting states
- **Error Handling**: Graceful fallbacks for all edge cases
- **Scalable Architecture**: Supports unlimited concurrent users

🎯 Business Applications
- **Lead Generation**: Capture user data through engaging trivia
- **Employee Training**: Create custom questions for onboarding
- **Customer Engagement**: Keep users active in your Telegram community
- **Educational Tools**: Subject-specific learning with progress tracking
- **Event Activation**: Conferences, workshops, or team building

💡 Customization Options
- Modify question categories for your niche
- Adjust scoring systems and difficulty levels
- Add custom commands and features
- Integrate with other platforms or APIs
- Create specialized question sets

🔗 Get Started

Ready to build your own AI-powered trivia bot? Start with n8n and follow the comprehensive setup guide included in this workflow template.

Next Steps:
1. Import this workflow template
2. Follow the database setup instructions in the sticky notes
3. Configure your API credentials
4. Test with sample questions
5. Launch your trivia bot!

Turn your friend group into trivia champions with AI-generated questions that spark friendly competition!
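The difficulty-based scoring and repeat-prevention logic can be sketched as two small functions. This assumes one point per difficulty star and illustrative field names; the template's exact point values and schema live in its sticky notes:

```javascript
// Award points for a correct answer based on the question's star rating,
// and record the question ID so it is never asked again.
function scoreAnswer(user, question, isCorrect) {
  const points = isCorrect ? question.stars : 0;
  return {
    ...user,
    score: user.score + points,
    answered: [...user.answered, question.id], // history used to avoid repeats
  };
}

// Pick the next question the user has not yet answered.
function nextQuestion(user, questions) {
  return questions.find((q) => !user.answered.includes(q.id)) ?? null;
}

let user = { score: 0, answered: [] };
const questions = [
  { id: "q1", stars: 3, text: "Capital of France?" },
  { id: "q2", stars: 5, text: "Year of the moon landing?" },
];

user = scoreAnswer(user, questions[0], true); // +3 points for a 3-star question
const upNext = nextQuestion(user, questions); // q2, since q1 was just answered
```

In the workflow these two operations are backed by NocoDB tables rather than in-memory state, but the decision logic is the same.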
by Ranjan Dailata
Who this is for?

This workflow is designed for:
- **Marketing analysts, SEO specialists, and content strategists** who want automated intelligence on their online competitors.
- **Growth teams** that need quick insights from SERPs (Search Engine Results Pages) without manual data scraping.
- **Agencies** managing multiple clients' SEO presence and tracking competitive positioning in real time.

What problem is this workflow solving?

Manual competitor research is time-consuming, fragmented, and often lacks actionable insights. This workflow automates the entire process by:
- Fetching SERP results from multiple search engines (Google, Bing, Yandex, DuckDuckGo) using Thordata's Scraper API.
- Using OpenAI GPT-4.1-mini to analyze, summarize, and extract keyword opportunities, topic clusters, and competitor weaknesses.
- Producing structured, JSON-based insights ready for dashboards or reports.

Essentially, it transforms raw SERP data into strategic marketing intelligence, saving hours of research time.

What this workflow does

Here's a step-by-step overview of how the workflow operates:

Step 1: Manual Trigger
Initiates the process on demand when you click "Execute Workflow."

Step 2: Set the Input Query
The "Set Input Fields" node defines your search query, such as:
> "Top SEO strategies for e-commerce in 2025"

Step 3: Multi-Engine SERP Fetching
Four HTTP request tools send the query to the Thordata Scraper API to retrieve results from Google, Bing, Yandex, and DuckDuckGo. Each uses Bearer authentication configured via the "Thordata SERP Bearer Auth Account."

Step 4: AI Agent Processing
The LangChain AI Agent orchestrates the data flow, combining inputs and preparing them for structured analysis.
Step 5: SEO Analysis
The SEO Analyst node (powered by GPT-4.1-mini) parses SERP results into a structured schema, extracting:
- Competitor domains
- Page titles & content types
- Ranking positions
- Keyword overlaps
- Traffic share estimations
- Strengths and weaknesses

Step 6: Summarization
The Summarize the content node distills complex data into a concise executive summary using GPT-4.1-mini.

Step 7: Keyword & Topic Extraction
The Keyword and Topic Analysis node extracts:
- Primary and secondary keywords
- Topic clusters and content gaps
- SEO strength scores
- Competitor insights

Step 8: Output Formatting
The Structured Output Parser ensures results are clean, validated JSON objects for further integration (e.g., Google Sheets, Notion, or dashboards).

Setup

Prerequisites:
- n8n Cloud or self-hosted instance
- Thordata Scraper API key (for SERP data retrieval)
- OpenAI API key (for GPT-based reasoning)

Setup Steps:
1. Add Credentials: Go to Credentials → Add New → HTTP Bearer Auth → paste your Thordata API token. Add OpenAI API credentials for the GPT model.
2. Import the Workflow: Copy the provided JSON or upload it into your n8n instance.
3. Set Input: In the "Set the Input Fields" node, replace the example query with your desired topic, e.g., "Google Search for Top SEO strategies for e-commerce in 2025."
4. Execute: Click "Execute Workflow" to run the analysis.

How to customize this workflow to your needs

Modify Search Query: Change the search_query variable in the Set node to any target keyword or topic.

Change AI Model: In the OpenAI Chat Model nodes, you can switch from gpt-4.1-mini to another model for better quality or lower cost.
Extend Analysis: Edit the JSON schema in the "Information Extractor" nodes to include:
- Sentiment analysis of top pages
- SERP volatility metrics
- Content freshness indicators

Export Results: Connect the output to:
- **Google Sheets / Airtable** for analytics
- **Notion / Slack** for team reporting
- **Webhook / Database** for automated storage

Summary

This workflow creates an AI-powered competitor intelligence system inside n8n by blending:
- Real-time SERP scraping (Thordata)
- Automated AI reasoning (OpenAI GPT-4.1-mini)
- Structured data extraction (LangChain Information Extractors)
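The Structured Output Parser step (Step 8) amounts to parsing the model's raw text as JSON and checking it against the expected schema before anything downstream consumes it. A minimal sketch, with illustrative field names rather than the template's exact schema:

```javascript
// Parse the LLM's raw text output and verify required fields are present,
// so downstream nodes (Sheets, Notion, dashboards) only ever see valid JSON.
function parseStructuredOutput(raw, requiredFields) {
  let data;
  try {
    data = JSON.parse(raw);
  } catch {
    return { ok: false, error: "Model did not return valid JSON" };
  }
  const missing = requiredFields.filter((field) => !(field in data));
  return missing.length
    ? { ok: false, error: `Missing fields: ${missing.join(", ")}` }
    : { ok: true, data };
}

const raw = '{"competitor_domains": ["example.com"], "primary_keywords": ["seo"]}';
const result = parseStructuredOutput(raw, ["competitor_domains", "primary_keywords"]);
const failed = parseStructuredOutput("not json at all", []);
```

n8n's LangChain output parsers implement a richer version of this check (including schema-guided retries), but the validate-before-integrate principle is the same.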
by David Ashby
Complete MCP server exposing 4 AWS Cost and Usage Report Service API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add AWS Cost and Usage Report Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the AWS Cost and Usage Report Service API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to http://cur.{region}.amazonaws.com
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (4 total)
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.DeleteReportDefinition: Deletes the specified report.
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.DescribeReportDefinitions: Lists the AWS Cost and Usage reports available to this account.
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.ModifyReportDefinition: Allows you to programmatically update your report preferences.
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.PutReportDefinition: Creates a new report using the description that you provide.
🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native AWS Cost and Usage Report Service API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
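Conceptually, the $fromAI() placeholders let the calling AI agent supply parameter values by name at request time. This plain-JS imitation shows the idea; it is not n8n's internal implementation, and the parameter names are illustrative (borrowed from the DescribeReportDefinitions request shape):

```javascript
// Imitation of $fromAI() resolution: each placeholder pulls its value by
// key from the arguments the AI agent sent, with an optional fallback.
// This is a conceptual sketch, not n8n's actual implementation.
function makeFromAI(agentArgs) {
  return (key, fallback = null) => (key in agentArgs ? agentArgs[key] : fallback);
}

// Arguments as an AI agent might supply them for a paginated listing call.
const agentArgs = { MaxResults: 5, NextToken: "abc123" };
const $fromAI = makeFromAI(agentArgs);

// Request body assembled from AI-provided parameters.
const body = {
  MaxResults: $fromAI("MaxResults", 10),
  NextToken: $fromAI("NextToken"),
};
const fallbackValue = $fromAI("MissingKey", 10); // unset keys fall back
```

This is why the listing claims "zero setup": the agent, not the workflow author, decides what goes into each parameter on every call.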
by David Ashby
Complete MCP server exposing 8 Bulk WHOIS API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Bulk WHOIS API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Bulk WHOIS API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to http://localhost:5000
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (8 total)

🔧 Batch (4 endpoints)
• GET /batch: Get your batches
• POST /batch: Create a batch. The batch is then processed until all provided items have been completed; at any time it can be fetched to get the current status, optionally with results.
• DELETE /batch/{id}: Delete a batch
• GET /batch/{id}: Get a batch

🔧 Db (1 endpoint)
• GET /db: Query the domain database

🔧 Domains (3 endpoints)
• GET /domains/{domain}/check: Check domain availability
• GET /domains/{domain}/rank: Check domain rank (authority)
• GET /domains/{domain}/whois: WHOIS query for a domain

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Bulk WHOIS API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 11 hashlookup CIRCL API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add hashlookup CIRCL API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the hashlookup CIRCL API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://hashlookup.circl.lu
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (11 total)

🔧 Bulk (2 endpoints)
• POST /bulk/md5: Bulk search MD5 hashes
• POST /bulk/sha1: Bulk search SHA1 hashes

🔧 Children (1 endpoint)
• GET /children/{sha1}/{count}/{cursor}: Return children of a given SHA1. The number of elements to return and an offset must be given; if not set, the first 100 elements are returned. A cursor must be given to paginate; the starting cursor is 0.

🔧 Info (1 endpoint)
• GET /info: Get database info

🔧 Lookup (3 endpoints)
• GET /lookup/md5/{md5}: Look up an MD5 hash
• GET /lookup/sha1/{sha1}: Look up a SHA-1 hash
• GET /lookup/sha256/{sha256}: Look up a SHA-256 hash

🔧 Parents (1 endpoint)
• GET /parents/{sha1}/{count}/{cursor}: Return parents of a given SHA1. The number of elements to return and an offset must be given; if not set, the first 100 elements are returned. A cursor must be given to paginate; the starting cursor is 0.

🔧 Session (2 endpoints)
• GET /session/create/{name}: Create a session key to keep search context. The session is attached to a name. After the session is created, the hashlookup_session header can be set to the session name.
• GET /session/get/{name}: Return the set of matching and non-matching hashes from a session.

🔧 Stats (1 endpoint)
• GET /stats/top: Get top queries

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native hashlookup CIRCL API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Ezema Kingsley Chibuzo
🧠 What It Does

This n8n workflow turns your Telegram bot into a smart, multi-modal AI assistant that accepts text, documents, images, and audio messages, interprets them using OpenAI models, and responds instantly with context-aware answers. It integrates a Supabase vector database to store document embeddings and retrieve relevant information before sending a prompt to OpenAI, enabling a full RAG experience.

💡 Why This Workflow?

Most support bots can only handle basic text input. This workflow:
- Supports multiple input formats (voice, documents, images, text)
- Dynamically extracts and processes data from uploaded files
- Implements RAG by combining user input with relevant memory or vector-based context
- Delivers more accurate, relevant, and human-like AI responses

👤 Who It's For
- Businesses looking to automate support using Telegram
- Freelancers or solopreneurs offering AI chatbots for businesses
- Creators building AI-powered bots for real use cases: great for customer support knowledge, legal or policy documents, long FAQs, project documentation, and product information retrieval
- Devs or analysts exploring AI + multi-format input + vector memory

⚙️ How It Works

🗂️ Knowledge Base Setup
Run the "Add to Supabase Vector DB" workflow manually to upload a document from your Google Drive and embed it into your vector database. This powers the Telegram chatbot's ability to answer questions using your content.

🔁 Telegram Message Routing
- The Telegram Trigger captures the user message (text, image, voice, document)
- The Message Router routes input by type using a Switch node
- Each type is handled separately:
  - Voice → transcribed to text (.ogg, .mp3)
  - Image → analyzed and described as text
  - Text → sent directly to the AI Agent (.txt)
  - Document → parsed (e.g. .docx to .txt) accordingly

📎 Document Type Routing
Before routing documents by type, the Supported Document File Types node first checks whether the file extension is allowed.
If a file type is not supported, the workflow exits early with an error message, preventing unnecessary processing. Supported documents are then routed using the Document Router node and converted to text for further processing.

Supported document file types: .jpg, .jpeg, .png, .webp, .pdf, .doc, .docx, .xls, .xlsx, .json, .xml

The text content is combined with stored memory and embedded knowledge using a RAG approach, enabling the AI to respond based on real uploaded data.

🧠 RAG via Supabase
- Uploaded documents are vectorized using OpenAI Embeddings.
- Embeddings are stored in Supabase with metadata.
- On new questions, the chatbot:
  1. Extracts the question intent
  2. Queries Supabase for semantically similar chunks
  3. Ranks the retrieved chunks to find the most relevant matches
  4. Injects them into the prompt for OpenAI
- OpenAI generates a grounded response based on the actual document content.
- The response is sent to the Telegram user with content awareness.

🛠 How to Set It Up
1. Open n8n Cloud or your local/self-hosted instance.
2. Import the `.json` workflow file.
3. Set up these credentials:
   - Google Drive API key
   - Telegram API (bot token)
   - OpenAI API
   - Supabase API key + environment
   - ConvertAPI API key
   - Postgres API key
   - Cohere API key
4. Add a custom AI agent prompt that reflects your business domain, tone, and purpose. This is very important: without it, your agent won't know how best to respond.
5. Activate the workflow.
6. Start testing by sending a message or document to your Telegram bot.
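The early-exit gate on file types can be sketched as an allowlist check. This assumes the extension list above; the template's actual node may differ in details:

```javascript
// Allowlist of supported extensions (from the list above).
const SUPPORTED_EXTENSIONS = new Set([
  ".jpg", ".jpeg", ".png", ".webp", ".pdf", ".doc",
  ".docx", ".xls", ".xlsx", ".json", ".xml",
]);

// Check a file name's extension and either pass it through to the
// Document Router or exit early with an error message.
function checkFileSupported(fileName) {
  const dot = fileName.lastIndexOf(".");
  const ext = dot === -1 ? "" : fileName.slice(dot).toLowerCase();
  return SUPPORTED_EXTENSIONS.has(ext)
    ? { supported: true, ext }
    : { supported: false, error: `Unsupported file type: ${ext || "(none)"}` };
}

const ok  = checkFileSupported("policy.PDF"); // extensions compared case-insensitively
const bad = checkFileSupported("notes.txt");  // triggers the early-exit error path
```

Gating before parsing means the expensive conversion and embedding steps never run on files the bot cannot handle.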
by Marcial Ambriz
Remixed from Solomon's "Backup your workflows to GitHub" template. Check out his templates.

How it works

This workflow backs up your typebots to GitHub. It uses the Typebot API to export all typebots, then loops over the data and checks GitHub to see whether a file exists that uses the credential's ID. Once checked, it will:
- update the file on GitHub if it exists;
- create a new file if it doesn't exist;
- skip the file if the content is unchanged.

In addition, it checks whether any flows have been deleted from the Typebot workspace. If a flow no longer exists in the workspace, the corresponding file is removed from the repository to keep everything in sync.

Who is this for?

People wanting to back up their typebots (flows) outside the server for safety purposes, or to migrate to another server.
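The update/create/ignore decision in the backup loop boils down to comparing the freshly exported typebot with what is already in the repository. A sketch under the assumption that files store the typebot as pretty-printed JSON (the existing-file lookup is stubbed here; in the workflow it is a GitHub node):

```javascript
// Decide what the loop should do for one typebot: "create" when no file
// exists yet, "ignore" when content is unchanged, "update" otherwise.
function decideAction(typebot, existingFile) {
  if (!existingFile) return "create";
  const newContent = JSON.stringify(typebot, null, 2);
  return existingFile.content === newContent ? "ignore" : "update";
}

const typebot = { id: "tb_1", name: "Support flow" };
// Stubbed result of the GitHub "get file" lookup for an unchanged backup.
const sameFile = { content: JSON.stringify(typebot, null, 2) };

const whenMissing   = decideAction(typebot, null);
const whenUnchanged = decideAction(typebot, sameFile);
const whenChanged   = decideAction({ ...typebot, name: "Sales flow" }, sameFile);
```

Skipping unchanged files keeps the Git history clean: a commit appears only when a typebot actually changed, not on every scheduled backup run.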